00:00:00.001 Started by upstream project "autotest-per-patch" build number 130918 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.106 Using shallow fetch with depth 1 00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.106 > git --version # timeout=10 00:00:00.148 > git --version # 'git version 2.39.2' 00:00:00.149 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.557 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.568 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.582 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.582 > git config core.sparsecheckout # timeout=10 00:00:05.593 > git read-tree -mu HEAD # timeout=10 00:00:05.609 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.628 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.628 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:05.729 [Pipeline] Start of Pipeline 00:00:05.741 [Pipeline] library 00:00:05.743 Loading library shm_lib@master 00:00:05.743 Library shm_lib@master is cached. Copying from home. 00:00:05.762 [Pipeline] node 00:00:05.774 Running on WFP34 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.775 [Pipeline] { 00:00:05.784 [Pipeline] catchError 00:00:05.785 [Pipeline] { 00:00:05.797 [Pipeline] wrap 00:00:05.806 [Pipeline] { 00:00:05.814 [Pipeline] stage 00:00:05.816 [Pipeline] { (Prologue) 00:00:06.026 [Pipeline] sh 00:00:06.307 + logger -p user.info -t JENKINS-CI 00:00:06.325 [Pipeline] echo 00:00:06.327 Node: WFP34 00:00:06.334 [Pipeline] sh 00:00:06.634 [Pipeline] setCustomBuildProperty 00:00:06.647 [Pipeline] echo 00:00:06.648 Cleanup processes 00:00:06.653 [Pipeline] sh 00:00:06.940 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.941 3185611 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.954 [Pipeline] sh 00:00:07.242 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.242 ++ grep -v 'sudo pgrep' 00:00:07.242 ++ awk '{print $1}' 00:00:07.242 + sudo kill -9 00:00:07.242 + true 00:00:07.254 [Pipeline] cleanWs 00:00:07.263 [WS-CLEANUP] Deleting project workspace... 00:00:07.263 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.269 [WS-CLEANUP] done 00:00:07.274 [Pipeline] setCustomBuildProperty 00:00:07.290 [Pipeline] sh 00:00:07.576 + sudo git config --global --replace-all safe.directory '*' 00:00:07.670 [Pipeline] httpRequest 00:00:08.065 [Pipeline] echo 00:00:08.067 Sorcerer 10.211.164.101 is alive 00:00:08.076 [Pipeline] retry 00:00:08.078 [Pipeline] { 00:00:08.093 [Pipeline] httpRequest 00:00:08.097 HttpMethod: GET 00:00:08.098 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.098 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.120 Response Code: HTTP/1.1 200 OK 00:00:08.120 Success: Status code 200 is in the accepted range: 200,404 00:00:08.120 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:15.415 [Pipeline] } 00:00:15.448 [Pipeline] // retry 00:00:15.453 [Pipeline] sh 00:00:15.733 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:15.748 [Pipeline] httpRequest 00:00:16.120 [Pipeline] echo 00:00:16.122 Sorcerer 10.211.164.101 is alive 00:00:16.131 [Pipeline] retry 00:00:16.133 [Pipeline] { 00:00:16.147 [Pipeline] httpRequest 00:00:16.151 HttpMethod: GET 00:00:16.152 URL: http://10.211.164.101/packages/spdk_8ce2f3c7de36646aa8b5534665fe2b12e0819d1f.tar.gz 00:00:16.152 Sending request to url: http://10.211.164.101/packages/spdk_8ce2f3c7de36646aa8b5534665fe2b12e0819d1f.tar.gz 00:00:16.160 Response Code: HTTP/1.1 200 OK 00:00:16.161 Success: Status code 200 is in the accepted range: 200,404 00:00:16.161 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_8ce2f3c7de36646aa8b5534665fe2b12e0819d1f.tar.gz 00:01:13.850 [Pipeline] } 00:01:13.867 [Pipeline] // retry 00:01:13.874 [Pipeline] sh 00:01:14.158 + tar --no-same-owner -xf spdk_8ce2f3c7de36646aa8b5534665fe2b12e0819d1f.tar.gz 00:01:16.710 [Pipeline] sh 00:01:16.997 + git -C spdk log --oneline -n5 00:01:16.997 8ce2f3c7d util: handle events for vfio fd type 00:01:16.997 381b6895f util: Extended options for spdk_fd_group_add 00:01:16.997 42d568143 nvme: interface to retrieve fd for a queue 00:01:16.997 21b5d8b71 nvme: enable interrupts for pcie nvme devices 00:01:16.997 dd150dfd4 nvme: Add transport interface to enable interrupts 00:01:17.009 [Pipeline] } 00:01:17.023 [Pipeline] // stage 00:01:17.031 [Pipeline] stage 00:01:17.033 [Pipeline] { (Prepare) 00:01:17.049 [Pipeline] writeFile 00:01:17.063 [Pipeline] sh 00:01:17.348 + logger -p user.info -t JENKINS-CI 00:01:17.361 [Pipeline] sh 00:01:17.651 + logger -p user.info -t JENKINS-CI 00:01:17.662 [Pipeline] sh 00:01:17.945 + cat autorun-spdk.conf 00:01:17.945 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.945 SPDK_TEST_NVMF=1 00:01:17.945 SPDK_TEST_NVME_CLI=1 00:01:17.945 SPDK_TEST_NVMF_NICS=mlx5 00:01:17.945 SPDK_RUN_UBSAN=1 00:01:17.945 NET_TYPE=phy 00:01:17.952 RUN_NIGHTLY=0 00:01:17.956 [Pipeline] readFile 00:01:17.978 [Pipeline] withEnv 00:01:17.980 [Pipeline] { 00:01:17.992 [Pipeline] sh 00:01:18.275 + set -ex 00:01:18.275 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:18.275 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:18.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.275 ++ SPDK_TEST_NVMF=1 00:01:18.275 ++ SPDK_TEST_NVME_CLI=1 00:01:18.275 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:18.275 ++ SPDK_RUN_UBSAN=1 00:01:18.275 ++ NET_TYPE=phy 00:01:18.275 ++ RUN_NIGHTLY=0 00:01:18.275 + case $SPDK_TEST_NVMF_NICS in 00:01:18.275 + 
DRIVERS=mlx5_ib 00:01:18.275 + [[ -n mlx5_ib ]] 00:01:18.275 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:18.275 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:24.844 rmmod: ERROR: Module irdma is not currently loaded 00:01:24.844 rmmod: ERROR: Module i40iw is not currently loaded 00:01:24.844 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:24.844 + true 00:01:24.844 + for D in $DRIVERS 00:01:24.844 + sudo modprobe mlx5_ib 00:01:24.844 + exit 0 00:01:24.853 [Pipeline] } 00:01:24.869 [Pipeline] // withEnv 00:01:24.874 [Pipeline] } 00:01:24.887 [Pipeline] // stage 00:01:24.897 [Pipeline] catchError 00:01:24.899 [Pipeline] { 00:01:24.912 [Pipeline] timeout 00:01:24.912 Timeout set to expire in 1 hr 0 min 00:01:24.914 [Pipeline] { 00:01:24.928 [Pipeline] stage 00:01:24.931 [Pipeline] { (Tests) 00:01:24.946 [Pipeline] sh 00:01:25.233 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.233 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.233 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:25.233 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:25.233 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:25.233 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:25.233 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:25.233 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:25.233 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:25.233 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:25.233 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:25.233 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.233 + source /etc/os-release 00:01:25.233 ++ NAME='Fedora Linux' 00:01:25.233 ++ VERSION='39 (Cloud Edition)' 00:01:25.233 ++ ID=fedora 00:01:25.233 ++ VERSION_ID=39 00:01:25.233 ++ VERSION_CODENAME= 00:01:25.233 ++ PLATFORM_ID=platform:f39 00:01:25.233 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.233 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.233 ++ LOGO=fedora-logo-icon 00:01:25.233 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.233 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.233 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.233 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.233 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.233 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.233 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.233 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.233 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.233 ++ SUPPORT_END=2024-11-12 00:01:25.233 ++ VARIANT='Cloud Edition' 00:01:25.233 ++ VARIANT_ID=cloud 00:01:25.233 + uname -a 00:01:25.233 Linux spdk-wfp-34 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.233 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:28.525 Hugepages 00:01:28.525 node hugesize free / total 00:01:28.525 node0 1048576kB 0 / 0 00:01:28.525 node0 2048kB 0 / 0 00:01:28.525 node1 1048576kB 0 / 0 00:01:28.525 node1 2048kB 0 / 0 00:01:28.525 00:01:28.525 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.525 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 
00:01:28.525 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:28.525 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:28.525 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:28.525 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:28.525 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:28.525 + rm -f /tmp/spdk-ld-path 00:01:28.525 + source autorun-spdk.conf 00:01:28.525 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.525 ++ SPDK_TEST_NVMF=1 00:01:28.525 ++ SPDK_TEST_NVME_CLI=1 00:01:28.525 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:28.525 ++ SPDK_RUN_UBSAN=1 00:01:28.525 ++ NET_TYPE=phy 00:01:28.525 ++ RUN_NIGHTLY=0 00:01:28.525 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.525 + [[ -n '' ]] 00:01:28.525 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:28.525 + for M in /var/spdk/build-*-manifest.txt 00:01:28.525 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.525 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:28.525 + for M in /var/spdk/build-*-manifest.txt 00:01:28.525 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.525 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:28.525 + for M in /var/spdk/build-*-manifest.txt 00:01:28.525 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.525 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:28.525 ++ uname 00:01:28.525 + [[ Linux == \L\i\n\u\x ]] 00:01:28.525 + sudo dmesg -T 00:01:28.525 + sudo dmesg --clear 00:01:28.525 + dmesg_pid=3186993 00:01:28.525 + [[ Fedora Linux == FreeBSD ]] 00:01:28.525 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.525 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.525 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.525 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.525 + sudo dmesg -Tw 00:01:28.525 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.525 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.525 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.525 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:28.525 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.525 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.525 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.525 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.525 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.525 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.525 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:28.525 Test configuration: 00:01:28.525 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.525 SPDK_TEST_NVMF=1 00:01:28.525 SPDK_TEST_NVME_CLI=1 00:01:28.525 SPDK_TEST_NVMF_NICS=mlx5 00:01:28.525 SPDK_RUN_UBSAN=1 00:01:28.525 NET_TYPE=phy 00:01:28.525 RUN_NIGHTLY=0 18:06:41 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:28.525 18:06:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:28.525 18:06:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.525 18:06:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.525 18:06:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.525 18:06:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.525 18:06:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.525 18:06:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.525 18:06:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.525 18:06:41 -- paths/export.sh@5 -- $ export PATH 00:01:28.525 18:06:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.525 18:06:41 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:28.525 18:06:41 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:28.525 18:06:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403601.XXXXXX 00:01:28.526 18:06:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728403601.cSBdSt 00:01:28.526 18:06:41 -- common/autobuild_common.sh@488 -- $ 
[[ -n '' ]] 00:01:28.526 18:06:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:28.526 18:06:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:28.526 18:06:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.526 18:06:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.526 18:06:41 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:28.526 18:06:41 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:28.526 18:06:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.526 18:06:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:28.526 18:06:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:28.526 18:06:41 -- pm/common@17 -- $ local monitor 00:01:28.526 18:06:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.526 18:06:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.526 18:06:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.526 18:06:41 -- pm/common@21 -- $ date +%s 00:01:28.526 18:06:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.526 18:06:41 -- pm/common@21 -- $ date +%s 00:01:28.526 18:06:41 -- pm/common@25 -- $ sleep 1 00:01:28.526 18:06:41 -- pm/common@21 -- $ date +%s 00:01:28.526 18:06:41 -- pm/common@21 -- $ date +%s 00:01:28.526 18:06:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403601 00:01:28.526 18:06:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403601 00:01:28.526 18:06:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403601 00:01:28.526 18:06:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403601 00:01:28.526 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403601_collect-cpu-load.pm.log 00:01:28.526 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403601_collect-vmstat.pm.log 00:01:28.526 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403601_collect-cpu-temp.pm.log 00:01:28.526 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403601_collect-bmc-pm.bmc.pm.log 00:01:29.464 18:06:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:29.464 18:06:42 -- spdk/autobuild.sh@11 -- $ 
SPDK_TEST_AUTOBUILD= 00:01:29.464 18:06:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.464 18:06:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:29.464 18:06:42 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.464 Tue Oct 8 04:06:42 PM UTC 2024 00:01:29.464 18:06:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.464 v25.01-pre-48-g8ce2f3c7d 00:01:29.464 18:06:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.464 18:06:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.464 18:06:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.464 18:06:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:29.464 18:06:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:29.464 18:06:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.724 ************************************ 00:01:29.724 START TEST ubsan 00:01:29.724 ************************************ 00:01:29.724 18:06:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:29.724 using ubsan 00:01:29.724 00:01:29.724 real 0m0.001s 00:01:29.724 user 0m0.000s 00:01:29.724 sys 0m0.000s 00:01:29.724 18:06:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:29.724 18:06:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.724 ************************************ 00:01:29.724 END TEST ubsan 00:01:29.724 ************************************ 00:01:29.724 18:06:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.724 18:06:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.724 18:06:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.724 18:06:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:29.724 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:29.724 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:29.983 Using 'verbs' RDMA provider 00:01:43.689 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:58.838 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:58.838 Creating mk/config.mk...done. 00:01:58.838 Creating mk/cc.flags.mk...done. 00:01:58.838 Type 'make' to build. 00:01:58.838 18:07:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:01:58.838 18:07:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:58.838 18:07:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.838 18:07:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.838 ************************************ 00:01:58.838 START TEST make 00:01:58.838 ************************************ 00:01:58.838 18:07:11 make -- common/autotest_common.sh@1125 -- $ make -j72 00:01:58.838 make[1]: Nothing to be done for 'all'. 
00:02:08.897 The Meson build system 00:02:08.897 Version: 1.5.0 00:02:08.897 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:08.897 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:08.897 Build type: native build 00:02:08.897 Program cat found: YES (/usr/bin/cat) 00:02:08.897 Project name: DPDK 00:02:08.897 Project version: 24.03.0 00:02:08.897 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.897 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.897 Host machine cpu family: x86_64 00:02:08.897 Host machine cpu: x86_64 00:02:08.897 Message: ## Building in Developer Mode ## 00:02:08.897 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.897 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.897 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.897 Program python3 found: YES (/usr/bin/python3) 00:02:08.897 Program cat found: YES (/usr/bin/cat) 00:02:08.897 Compiler for C supports arguments -march=native: YES 00:02:08.897 Checking for size of "void *" : 8 00:02:08.897 Checking for size of "void *" : 8 (cached) 00:02:08.897 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.897 Library m found: YES 00:02:08.897 Library numa found: YES 00:02:08.897 Has header "numaif.h" : YES 00:02:08.897 Library fdt found: NO 00:02:08.897 Library execinfo found: NO 00:02:08.897 Has header "execinfo.h" : YES 00:02:08.897 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.897 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.897 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.897 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.897 Run-time dependency openssl found: YES 3.1.1 00:02:08.897 Run-time dependency libpcap found: YES 1.10.4 00:02:08.897 Has header "pcap.h" with dependency libpcap: YES 00:02:08.897 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.897 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.897 Compiler for C supports arguments -Wformat: YES 00:02:08.897 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.897 Compiler for C supports arguments -Wformat-security: NO 00:02:08.897 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.897 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.898 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.898 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.898 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.898 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.898 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.898 Compiler for C supports arguments -Wundef: YES 00:02:08.898 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.898 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.898 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.898 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.898 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.898 Program objdump found: YES (/usr/bin/objdump) 00:02:08.898 Compiler for C supports arguments -mavx512f: YES 00:02:08.898 Checking if "AVX512 checking" compiles: YES 00:02:08.898 Fetching 
value of define "__SSE4_2__" : 1 00:02:08.898 Fetching value of define "__AES__" : 1 00:02:08.898 Fetching value of define "__AVX__" : 1 00:02:08.898 Fetching value of define "__AVX2__" : 1 00:02:08.898 Fetching value of define "__AVX512BW__" : 1 00:02:08.898 Fetching value of define "__AVX512CD__" : 1 00:02:08.898 Fetching value of define "__AVX512DQ__" : 1 00:02:08.898 Fetching value of define "__AVX512F__" : 1 00:02:08.898 Fetching value of define "__AVX512VL__" : 1 00:02:08.898 Fetching value of define "__PCLMUL__" : 1 00:02:08.898 Fetching value of define "__RDRND__" : 1 00:02:08.898 Fetching value of define "__RDSEED__" : 1 00:02:08.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.898 Fetching value of define "__znver1__" : (undefined) 00:02:08.898 Fetching value of define "__znver2__" : (undefined) 00:02:08.898 Fetching value of define "__znver3__" : (undefined) 00:02:08.898 Fetching value of define "__znver4__" : (undefined) 00:02:08.898 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.898 Message: lib/log: Defining dependency "log" 00:02:08.898 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.898 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.898 Checking for function "getentropy" : NO 00:02:08.898 Message: lib/eal: Defining dependency "eal" 00:02:08.898 Message: lib/ring: Defining dependency "ring" 00:02:08.898 Message: lib/rcu: Defining dependency "rcu" 00:02:08.898 Message: lib/mempool: Defining dependency "mempool" 00:02:08.898 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.898 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.898 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.898 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.898 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.898 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:08.898 Compiler for C supports arguments -mpclmul: YES 00:02:08.898 Compiler for C supports arguments -maes: YES 00:02:08.898 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.898 Compiler for C supports arguments -mavx512bw: YES 00:02:08.898 Compiler for C supports arguments -mavx512dq: YES 00:02:08.898 Compiler for C supports arguments -mavx512vl: YES 00:02:08.898 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.898 Compiler for C supports arguments -mavx2: YES 00:02:08.898 Compiler for C supports arguments -mavx: YES 00:02:08.898 Message: lib/net: Defining dependency "net" 00:02:08.898 Message: lib/meter: Defining dependency "meter" 00:02:08.898 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.898 Message: lib/pci: Defining dependency "pci" 00:02:08.898 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.898 Message: lib/hash: Defining dependency "hash" 00:02:08.898 Message: lib/timer: Defining dependency "timer" 00:02:08.898 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.898 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.898 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.898 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.898 Message: lib/power: Defining dependency "power" 00:02:08.898 Message: lib/reorder: Defining dependency "reorder" 00:02:08.898 Message: lib/security: Defining dependency "security" 00:02:08.898 Has header "linux/userfaultfd.h" : YES 00:02:08.898 Has header "linux/vduse.h" : YES 00:02:08.898 Message: 
lib/vhost: Defining dependency "vhost" 00:02:08.898 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.898 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.898 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.898 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.898 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.898 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.898 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.898 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.898 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.898 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.898 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.898 Configuring doxy-api-html.conf using configuration 00:02:08.898 Configuring doxy-api-man.conf using configuration 00:02:08.898 Program mandb found: YES (/usr/bin/mandb) 00:02:08.898 Program sphinx-build found: NO 00:02:08.898 Configuring rte_build_config.h using configuration 00:02:08.898 Message: 00:02:08.898 ================= 00:02:08.898 Applications Enabled 00:02:08.898 ================= 00:02:08.898 00:02:08.898 apps: 00:02:08.898 00:02:08.898 00:02:08.898 Message: 00:02:08.898 ================= 00:02:08.898 Libraries Enabled 00:02:08.898 ================= 00:02:08.898 00:02:08.898 libs: 00:02:08.898 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.898 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.898 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.898 00:02:08.898 Message: 00:02:08.898 =============== 00:02:08.898 Drivers Enabled 00:02:08.898 =============== 00:02:08.898 00:02:08.898 common: 00:02:08.898 00:02:08.898 bus: 00:02:08.898 pci, vdev, 00:02:08.898 mempool: 00:02:08.898 ring, 00:02:08.898 dma: 00:02:08.898 00:02:08.898 net: 00:02:08.898 00:02:08.898 crypto: 00:02:08.898 00:02:08.898 compress: 00:02:08.898 00:02:08.898 vdpa: 00:02:08.898 00:02:08.898 00:02:08.898 Message: 00:02:08.898 ================= 00:02:08.898 Content Skipped 00:02:08.898 ================= 00:02:08.898 00:02:08.898 apps: 00:02:08.898 dumpcap: explicitly disabled via build config 00:02:08.898 graph: explicitly disabled via build config 00:02:08.898 pdump: explicitly disabled via build config 00:02:08.898 proc-info: explicitly disabled via build config 00:02:08.898 test-acl: explicitly disabled via build config 00:02:08.898 test-bbdev: explicitly disabled via build config 00:02:08.898 test-cmdline: explicitly disabled via build config 00:02:08.898 test-compress-perf: explicitly disabled via build config 00:02:08.898 test-crypto-perf: explicitly disabled via build config 00:02:08.898 test-dma-perf: explicitly disabled via build config 00:02:08.898 test-eventdev: explicitly disabled via build config 00:02:08.898 test-fib: explicitly disabled via build config 00:02:08.898 test-flow-perf: explicitly disabled via build config 00:02:08.898 test-gpudev: explicitly disabled via build config 00:02:08.898 test-mldev: explicitly disabled via build config 00:02:08.898 test-pipeline: explicitly disabled via build config 00:02:08.898 test-pmd: explicitly disabled via build config 00:02:08.898 test-regex: explicitly disabled via build config 00:02:08.898 test-sad: explicitly disabled via build config 00:02:08.898 test-security-perf: explicitly disabled 
via build config 00:02:08.898 00:02:08.898 libs: 00:02:08.898 argparse: explicitly disabled via build config 00:02:08.898 metrics: explicitly disabled via build config 00:02:08.898 acl: explicitly disabled via build config 00:02:08.898 bbdev: explicitly disabled via build config 00:02:08.898 bitratestats: explicitly disabled via build config 00:02:08.898 bpf: explicitly disabled via build config 00:02:08.898 cfgfile: explicitly disabled via build config 00:02:08.898 distributor: explicitly disabled via build config 00:02:08.898 efd: explicitly disabled via build config 00:02:08.898 eventdev: explicitly disabled via build config 00:02:08.898 dispatcher: explicitly disabled via build config 00:02:08.898 gpudev: explicitly disabled via build config 00:02:08.898 gro: explicitly disabled via build config 00:02:08.898 gso: explicitly disabled via build config 00:02:08.898 ip_frag: explicitly disabled via build config 00:02:08.898 jobstats: explicitly disabled via build config 00:02:08.898 latencystats: explicitly disabled via build config 00:02:08.898 lpm: explicitly disabled via build config 00:02:08.898 member: explicitly disabled via build config 00:02:08.898 pcapng: explicitly disabled via build config 00:02:08.898 rawdev: explicitly disabled via build config 00:02:08.898 regexdev: explicitly disabled via build config 00:02:08.898 mldev: explicitly disabled via build config 00:02:08.898 rib: explicitly disabled via build config 00:02:08.898 sched: explicitly disabled via build config 00:02:08.898 stack: explicitly disabled via build config 00:02:08.898 ipsec: explicitly disabled via build config 00:02:08.898 pdcp: explicitly disabled via build config 00:02:08.898 fib: explicitly disabled via build config 00:02:08.898 port: explicitly disabled via build config 00:02:08.898 pdump: explicitly disabled via build config 00:02:08.898 table: explicitly disabled via build config 00:02:08.898 pipeline: explicitly disabled via build config 00:02:08.898 graph: explicitly disabled via build config 00:02:08.898 node: explicitly disabled via build config 00:02:08.898 00:02:08.898 drivers: 00:02:08.898 common/cpt: not in enabled drivers build config 00:02:08.898 common/dpaax: not in enabled drivers build config 00:02:08.898 common/iavf: not in enabled drivers build config 00:02:08.898 common/idpf: not in enabled drivers build config 00:02:08.898 common/ionic: not in enabled drivers build config 00:02:08.898 common/mvep: not in enabled drivers build config 00:02:08.898 common/octeontx: not in enabled drivers build config 00:02:08.898 bus/auxiliary: not in enabled drivers build config 00:02:08.898 bus/cdx: not in enabled drivers build config 00:02:08.898 bus/dpaa: not in enabled drivers build config 00:02:08.898 bus/fslmc: not in enabled drivers build config 00:02:08.898 bus/ifpga: not in enabled drivers build config 00:02:08.898 bus/platform: not in enabled drivers build config 00:02:08.898 bus/uacce: not in enabled drivers build config 00:02:08.898 bus/vmbus: not in enabled drivers build config 00:02:08.898 common/cnxk: not in enabled drivers build config 00:02:08.898 common/mlx5: not in enabled drivers build config 00:02:08.898 common/nfp: not in enabled drivers build config 00:02:08.898 common/nitrox: not in enabled drivers build config 00:02:08.899 common/qat: not in enabled drivers build config 00:02:08.899 common/sfc_efx: not in enabled drivers build config 00:02:08.899 mempool/bucket: not in enabled drivers build config 00:02:08.899 mempool/cnxk: not in enabled drivers build config 00:02:08.899 
mempool/dpaa: not in enabled drivers build config 00:02:08.899 mempool/dpaa2: not in enabled drivers build config 00:02:08.899 mempool/octeontx: not in enabled drivers build config 00:02:08.899 mempool/stack: not in enabled drivers build config 00:02:08.899 dma/cnxk: not in enabled drivers build config 00:02:08.899 dma/dpaa: not in enabled drivers build config 00:02:08.899 dma/dpaa2: not in enabled drivers build config 00:02:08.899 dma/hisilicon: not in enabled drivers build config 00:02:08.899 dma/idxd: not in enabled drivers build config 00:02:08.899 dma/ioat: not in enabled drivers build config 00:02:08.899 dma/skeleton: not in enabled drivers build config 00:02:08.899 net/af_packet: not in enabled drivers build config 00:02:08.899 net/af_xdp: not in enabled drivers build config 00:02:08.899 net/ark: not in enabled drivers build config 00:02:08.899 net/atlantic: not in enabled drivers build config 00:02:08.899 net/avp: not in enabled drivers build config 00:02:08.899 net/axgbe: not in enabled drivers build config 00:02:08.899 net/bnx2x: not in enabled drivers build config 00:02:08.899 net/bnxt: not in enabled drivers build config 00:02:08.899 net/bonding: not in enabled drivers build config 00:02:08.899 net/cnxk: not in enabled drivers build config 00:02:08.899 net/cpfl: not in enabled drivers build config 00:02:08.899 net/cxgbe: not in enabled drivers build config 00:02:08.899 net/dpaa: not in enabled drivers build config 00:02:08.899 net/dpaa2: not in enabled drivers build config 00:02:08.899 net/e1000: not in enabled drivers build config 00:02:08.899 net/ena: not in enabled drivers build config 00:02:08.899 net/enetc: not in enabled drivers build config 00:02:08.899 net/enetfec: not in enabled drivers build config 00:02:08.899 net/enic: not in enabled drivers build config 00:02:08.899 net/failsafe: not in enabled drivers build config 00:02:08.899 net/fm10k: not in enabled drivers build config 00:02:08.899 net/gve: not in enabled drivers build config 00:02:08.899 net/hinic: not in enabled drivers build config 00:02:08.899 net/hns3: not in enabled drivers build config 00:02:08.899 net/i40e: not in enabled drivers build config 00:02:08.899 net/iavf: not in enabled drivers build config 00:02:08.899 net/ice: not in enabled drivers build config 00:02:08.899 net/idpf: not in enabled drivers build config 00:02:08.899 net/igc: not in enabled drivers build config 00:02:08.899 net/ionic: not in enabled drivers build config 00:02:08.899 net/ipn3ke: not in enabled drivers build config 00:02:08.899 net/ixgbe: not in enabled drivers build config 00:02:08.899 net/mana: not in enabled drivers build config 00:02:08.899 net/memif: not in enabled drivers build config 00:02:08.899 net/mlx4: not in enabled drivers build config 00:02:08.899 net/mlx5: not in enabled drivers build config 00:02:08.899 net/mvneta: not in enabled drivers build config 00:02:08.899 net/mvpp2: not in enabled drivers build config 00:02:08.899 net/netvsc: not in enabled drivers build config 00:02:08.899 net/nfb: not in enabled drivers build config 00:02:08.899 net/nfp: not in enabled drivers build config 00:02:08.899 net/ngbe: not in enabled drivers build config 00:02:08.899 net/null: not in enabled drivers build config 00:02:08.899 net/octeontx: not in enabled drivers build config 00:02:08.899 net/octeon_ep: not in enabled drivers build config 00:02:08.899 net/pcap: not in enabled drivers build config 00:02:08.899 net/pfe: not in enabled drivers build config 00:02:08.899 net/qede: not in enabled drivers build config 00:02:08.899 
net/ring: not in enabled drivers build config 00:02:08.899 net/sfc: not in enabled drivers build config 00:02:08.899 net/softnic: not in enabled drivers build config 00:02:08.899 net/tap: not in enabled drivers build config 00:02:08.899 net/thunderx: not in enabled drivers build config 00:02:08.899 net/txgbe: not in enabled drivers build config 00:02:08.899 net/vdev_netvsc: not in enabled drivers build config 00:02:08.899 net/vhost: not in enabled drivers build config 00:02:08.899 net/virtio: not in enabled drivers build config 00:02:08.899 net/vmxnet3: not in enabled drivers build config 00:02:08.899 raw/*: missing internal dependency, "rawdev" 00:02:08.899 crypto/armv8: not in enabled drivers build config 00:02:08.899 crypto/bcmfs: not in enabled drivers build config 00:02:08.899 crypto/caam_jr: not in enabled drivers build config 00:02:08.899 crypto/ccp: not in enabled drivers build config 00:02:08.899 crypto/cnxk: not in enabled drivers build config 00:02:08.899 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.899 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.899 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.899 crypto/mlx5: not in enabled drivers build config 00:02:08.899 crypto/mvsam: not in enabled drivers build config 00:02:08.899 crypto/nitrox: not in enabled drivers build config 00:02:08.899 crypto/null: not in enabled drivers build config 00:02:08.899 crypto/octeontx: not in enabled drivers build config 00:02:08.899 crypto/openssl: not in enabled drivers build config 00:02:08.899 crypto/scheduler: not in enabled drivers build config 00:02:08.899 crypto/uadk: not in enabled drivers build config 00:02:08.899 crypto/virtio: not in enabled drivers build config 00:02:08.899 compress/isal: not in enabled drivers build config 00:02:08.899 compress/mlx5: not in enabled drivers build config 00:02:08.899 compress/nitrox: not in enabled drivers build config 00:02:08.899 compress/octeontx: not in enabled drivers build config 00:02:08.899 compress/zlib: not in enabled drivers build config 00:02:08.899 regex/*: missing internal dependency, "regexdev" 00:02:08.899 ml/*: missing internal dependency, "mldev" 00:02:08.899 vdpa/ifc: not in enabled drivers build config 00:02:08.899 vdpa/mlx5: not in enabled drivers build config 00:02:08.899 vdpa/nfp: not in enabled drivers build config 00:02:08.899 vdpa/sfc: not in enabled drivers build config 00:02:08.899 event/*: missing internal dependency, "eventdev" 00:02:08.899 baseband/*: missing internal dependency, "bbdev" 00:02:08.899 gpu/*: missing internal dependency, "gpudev" 00:02:08.899 00:02:08.899 00:02:08.899 Build targets in project: 85 00:02:08.899 00:02:08.899 DPDK 24.03.0 00:02:08.899 00:02:08.899 User defined options 00:02:08.899 buildtype : debug 00:02:08.899 default_library : shared 00:02:08.899 libdir : lib 00:02:08.899 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:08.899 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.899 c_link_args : 00:02:08.899 cpu_instruction_set: native 00:02:08.899 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:08.899 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:08.899 enable_docs : false 00:02:08.899 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:08.899 enable_kmods : false 00:02:08.899 max_lcores : 128 00:02:08.899 tests : false 00:02:08.899 00:02:08.899 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.899 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:08.899 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.899 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:08.899 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.899 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.899 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.899 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:08.899 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.899 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.899 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.899 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.899 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.899 [12/268] Linking static target lib/librte_kvargs.a 00:02:08.899 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.899 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.899 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.899 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.899 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.899 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:08.899 [19/268] Linking static target lib/librte_log.a 00:02:08.899 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.899 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.899 [22/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.899 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.899 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.900 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.900 [26/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.900 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.900 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.900 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:08.900 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.900 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.900 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.900 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.900 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.900 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.900 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.900 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.900 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.900 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.900 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.900 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:08.900 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.900 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.900 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:08.900 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.900 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.900 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.900 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.900 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.900 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:08.900 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.900 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.900 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.900 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.900 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:08.900 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.900 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.900 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.900 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.900 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.900 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:08.900 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.900 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:08.900 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.900 [65/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.900 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.900 [67/268] Linking static target lib/librte_telemetry.a 00:02:08.900 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.900 [69/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.900 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:08.900 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:08.900 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.900 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:08.900 
[74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.900 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.900 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.900 [77/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.900 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.900 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.900 [80/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:08.900 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:08.900 [82/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.900 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.900 [84/268] Linking static target lib/librte_pci.a 00:02:08.900 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.900 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.900 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.900 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.900 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.900 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.900 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.900 [92/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.900 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.900 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:08.900 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.900 [96/268] Linking static target lib/librte_ring.a 00:02:08.900 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.900 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:08.900 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.900 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.900 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.900 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.900 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.900 [104/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:08.900 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.900 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.900 [107/268] Linking static target lib/librte_rcu.a 00:02:08.900 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.900 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:08.900 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:08.900 [111/268] Linking static target lib/librte_mempool.a 00:02:08.900 [112/268] Linking static target lib/librte_eal.a 00:02:08.900 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:08.900 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.160 [115/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.160 
[116/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.160 [117/268] Linking static target lib/librte_meter.a 00:02:09.160 [118/268] Linking static target lib/librte_net.a 00:02:09.160 [119/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:09.160 [120/268] Linking static target lib/librte_mbuf.a 00:02:09.160 [121/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.160 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.160 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.160 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.160 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.160 [126/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.160 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.161 [128/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.161 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.161 [130/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.161 [131/268] Linking static target lib/librte_timer.a 00:02:09.161 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.161 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.161 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.161 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.161 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.161 [137/268] Linking static target lib/librte_cmdline.a 00:02:09.161 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.161 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.161 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.161 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:09.161 [142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.161 [143/268] Linking target lib/librte_log.so.24.1 00:02:09.161 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.161 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.161 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.161 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.418 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.418 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:09.418 [150/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:09.418 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.418 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.418 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.418 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.418 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.418 [156/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.418 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.418 [158/268] Linking static target lib/librte_compressdev.a 00:02:09.418 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:09.418 [160/268] Linking static target lib/librte_dmadev.a 00:02:09.418 [161/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.418 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:09.418 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:09.418 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.418 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.418 [166/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.418 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.418 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.418 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:09.418 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.418 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.418 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:09.419 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.419 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:09.419 [175/268] Linking static target lib/librte_power.a 00:02:09.419 [176/268] Linking static target lib/librte_reorder.a 00:02:09.419 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:09.419 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.419 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.419 [180/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.419 [181/268] Linking static target lib/librte_security.a 00:02:09.419 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:09.419 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:09.419 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:09.419 [185/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.419 [186/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.419 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:09.419 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.680 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:09.680 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:09.680 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:09.680 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.680 [193/268] Linking static target lib/librte_hash.a 00:02:09.680 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.680 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.680 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.680 [197/268] 
Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:09.680 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.680 [199/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.680 [200/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.680 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.680 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.680 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:09.680 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:09.680 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.680 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.680 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.680 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:09.680 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:09.680 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:09.680 [211/268] Linking static target lib/librte_cryptodev.a 00:02:09.680 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.680 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.680 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:09.939 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.939 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.939 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.939 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.939 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.197 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.197 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.197 [222/268] Linking static target lib/librte_ethdev.a 00:02:10.197 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.456 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.456 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.715 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.715 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.282 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.282 [229/268] Linking static target lib/librte_vhost.a 00:02:12.220 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.602 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.175 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:21.552 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.552 [234/268] Linking target lib/librte_eal.so.24.1 00:02:21.811 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.811 [236/268] Linking target lib/librte_ring.so.24.1 00:02:21.811 [237/268] Linking target lib/librte_meter.so.24.1 00:02:21.811 [238/268] Linking target lib/librte_timer.so.24.1 00:02:21.811 [239/268] Linking target lib/librte_pci.so.24.1 00:02:21.811 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.811 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.811 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.811 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.811 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:22.070 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:22.070 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:22.070 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:22.070 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:22.070 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:22.070 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.070 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:22.070 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:22.070 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:22.329 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:22.329 [255/268] Linking target lib/librte_net.so.24.1 00:02:22.329 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:22.329 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:22.329 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:22.586 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:22.587 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:22.587 [261/268] Linking target lib/librte_hash.so.24.1 00:02:22.587 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:22.587 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:22.587 [264/268] Linking target lib/librte_security.so.24.1 00:02:22.587 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.587 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.844 [267/268] Linking target lib/librte_power.so.24.1 00:02:22.844 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.844 INFO: autodetecting backend as ninja 00:02:22.844 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:32.827 CC lib/ut_mock/mock.o 00:02:32.827 CC lib/log/log.o 00:02:32.827 CC lib/log/log_flags.o 00:02:32.827 CC lib/log/log_deprecated.o 00:02:32.827 CC lib/ut/ut.o 00:02:32.827 LIB libspdk_ut_mock.a 00:02:32.827 LIB libspdk_ut.a 00:02:32.827 LIB libspdk_log.a 00:02:32.827 SO libspdk_ut.so.2.0 00:02:32.827 SO libspdk_ut_mock.so.6.0 00:02:32.827 SO libspdk_log.so.7.0 00:02:32.827 SYMLINK libspdk_ut.so 00:02:32.827 SYMLINK libspdk_ut_mock.so 00:02:32.827 SYMLINK libspdk_log.so 
00:02:32.827 CXX lib/trace_parser/trace.o 00:02:32.827 CC lib/ioat/ioat.o 00:02:32.827 CC lib/dma/dma.o 00:02:32.827 CC lib/util/base64.o 00:02:32.827 CC lib/util/bit_array.o 00:02:32.827 CC lib/util/cpuset.o 00:02:32.827 CC lib/util/crc16.o 00:02:32.827 CC lib/util/crc32.o 00:02:32.827 CC lib/util/crc32c.o 00:02:32.827 CC lib/util/crc32_ieee.o 00:02:32.827 CC lib/util/crc64.o 00:02:32.827 CC lib/util/dif.o 00:02:32.827 CC lib/util/fd.o 00:02:32.827 CC lib/util/fd_group.o 00:02:32.827 CC lib/util/file.o 00:02:32.827 CC lib/util/hexlify.o 00:02:32.827 CC lib/util/iov.o 00:02:32.827 CC lib/util/math.o 00:02:32.827 CC lib/util/net.o 00:02:32.827 CC lib/util/pipe.o 00:02:32.827 CC lib/util/strerror_tls.o 00:02:32.827 CC lib/util/string.o 00:02:32.827 CC lib/util/uuid.o 00:02:32.827 CC lib/util/xor.o 00:02:32.827 CC lib/util/zipf.o 00:02:32.827 CC lib/util/md5.o 00:02:32.827 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.827 CC lib/vfio_user/host/vfio_user.o 00:02:32.827 LIB libspdk_dma.a 00:02:32.827 SO libspdk_dma.so.5.0 00:02:32.827 LIB libspdk_ioat.a 00:02:32.827 SYMLINK libspdk_dma.so 00:02:32.827 SO libspdk_ioat.so.7.0 00:02:32.827 SYMLINK libspdk_ioat.so 00:02:32.827 LIB libspdk_vfio_user.a 00:02:32.827 SO libspdk_vfio_user.so.5.0 00:02:32.827 LIB libspdk_util.a 00:02:32.827 SYMLINK libspdk_vfio_user.so 00:02:32.827 SO libspdk_util.so.10.1 00:02:32.827 SYMLINK libspdk_util.so 00:02:32.827 LIB libspdk_trace_parser.a 00:02:32.827 SO libspdk_trace_parser.so.6.0 00:02:32.827 SYMLINK libspdk_trace_parser.so 00:02:32.827 CC lib/idxd/idxd.o 00:02:32.827 CC lib/idxd/idxd_kernel.o 00:02:32.827 CC lib/idxd/idxd_user.o 00:02:32.827 CC lib/env_dpdk/env.o 00:02:32.827 CC lib/conf/conf.o 00:02:32.827 CC lib/env_dpdk/memory.o 00:02:32.827 CC lib/env_dpdk/init.o 00:02:32.827 CC lib/env_dpdk/pci.o 00:02:32.827 CC lib/rdma_utils/rdma_utils.o 00:02:33.086 CC lib/json/json_parse.o 00:02:33.086 CC lib/json/json_util.o 00:02:33.086 CC lib/env_dpdk/threads.o 00:02:33.086 CC lib/json/json_write.o 00:02:33.086 CC lib/env_dpdk/pci_ioat.o 00:02:33.086 CC lib/env_dpdk/pci_virtio.o 00:02:33.086 CC lib/env_dpdk/pci_vmd.o 00:02:33.086 CC lib/env_dpdk/pci_idxd.o 00:02:33.086 CC lib/env_dpdk/pci_event.o 00:02:33.086 CC lib/env_dpdk/sigbus_handler.o 00:02:33.086 CC lib/env_dpdk/pci_dpdk.o 00:02:33.086 CC lib/rdma_provider/common.o 00:02:33.086 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.086 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:33.086 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.086 CC lib/vmd/vmd.o 00:02:33.086 CC lib/vmd/led.o 00:02:33.086 LIB libspdk_rdma_provider.a 00:02:33.086 LIB libspdk_conf.a 00:02:33.086 SO libspdk_rdma_provider.so.6.0 00:02:33.345 SO libspdk_conf.so.6.0 00:02:33.345 LIB libspdk_rdma_utils.a 00:02:33.345 LIB libspdk_json.a 00:02:33.345 SYMLINK libspdk_rdma_provider.so 00:02:33.345 SO libspdk_rdma_utils.so.1.0 00:02:33.345 SYMLINK libspdk_conf.so 00:02:33.345 SO libspdk_json.so.6.0 00:02:33.345 SYMLINK libspdk_rdma_utils.so 00:02:33.345 SYMLINK libspdk_json.so 00:02:33.345 LIB libspdk_idxd.a 00:02:33.605 SO libspdk_idxd.so.12.1 00:02:33.605 LIB libspdk_vmd.a 00:02:33.605 SO libspdk_vmd.so.6.0 00:02:33.605 SYMLINK libspdk_idxd.so 00:02:33.605 SYMLINK libspdk_vmd.so 00:02:33.605 CC lib/jsonrpc/jsonrpc_server.o 00:02:33.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:33.605 CC lib/jsonrpc/jsonrpc_client.o 00:02:33.605 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:33.864 LIB libspdk_jsonrpc.a 00:02:34.123 SO libspdk_jsonrpc.so.6.0 00:02:34.123 LIB libspdk_env_dpdk.a 00:02:34.123 SYMLINK 
libspdk_jsonrpc.so 00:02:34.123 SO libspdk_env_dpdk.so.15.1 00:02:34.123 SYMLINK libspdk_env_dpdk.so 00:02:34.382 CC lib/rpc/rpc.o 00:02:34.640 LIB libspdk_rpc.a 00:02:34.640 SO libspdk_rpc.so.6.0 00:02:34.640 SYMLINK libspdk_rpc.so 00:02:35.208 CC lib/notify/notify.o 00:02:35.208 CC lib/notify/notify_rpc.o 00:02:35.208 CC lib/trace/trace.o 00:02:35.208 CC lib/trace/trace_flags.o 00:02:35.208 CC lib/trace/trace_rpc.o 00:02:35.208 CC lib/keyring/keyring.o 00:02:35.208 CC lib/keyring/keyring_rpc.o 00:02:35.208 LIB libspdk_notify.a 00:02:35.208 SO libspdk_notify.so.6.0 00:02:35.208 LIB libspdk_trace.a 00:02:35.208 LIB libspdk_keyring.a 00:02:35.466 SO libspdk_trace.so.11.0 00:02:35.466 SYMLINK libspdk_notify.so 00:02:35.466 SO libspdk_keyring.so.2.0 00:02:35.466 SYMLINK libspdk_trace.so 00:02:35.466 SYMLINK libspdk_keyring.so 00:02:35.724 CC lib/sock/sock.o 00:02:35.724 CC lib/sock/sock_rpc.o 00:02:35.724 CC lib/thread/thread.o 00:02:35.724 CC lib/thread/iobuf.o 00:02:36.291 LIB libspdk_sock.a 00:02:36.291 SO libspdk_sock.so.10.0 00:02:36.291 SYMLINK libspdk_sock.so 00:02:36.550 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.550 CC lib/nvme/nvme_ctrlr.o 00:02:36.550 CC lib/nvme/nvme_fabric.o 00:02:36.550 CC lib/nvme/nvme_ns_cmd.o 00:02:36.550 CC lib/nvme/nvme_ns.o 00:02:36.550 CC lib/nvme/nvme_pcie_common.o 00:02:36.550 CC lib/nvme/nvme_pcie.o 00:02:36.550 CC lib/nvme/nvme_qpair.o 00:02:36.550 CC lib/nvme/nvme.o 00:02:36.550 CC lib/nvme/nvme_quirks.o 00:02:36.550 CC lib/nvme/nvme_transport.o 00:02:36.550 CC lib/nvme/nvme_discovery.o 00:02:36.550 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.550 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.550 CC lib/nvme/nvme_tcp.o 00:02:36.550 CC lib/nvme/nvme_opal.o 00:02:36.550 CC lib/nvme/nvme_io_msg.o 00:02:36.550 CC lib/nvme/nvme_poll_group.o 00:02:36.550 CC lib/nvme/nvme_zns.o 00:02:36.550 CC lib/nvme/nvme_stubs.o 00:02:36.550 CC lib/nvme/nvme_auth.o 00:02:36.550 CC lib/nvme/nvme_cuse.o 00:02:36.550 CC lib/nvme/nvme_rdma.o 00:02:37.115 LIB libspdk_thread.a 00:02:37.115 SO libspdk_thread.so.10.2 00:02:37.115 SYMLINK libspdk_thread.so 00:02:37.374 CC lib/accel/accel.o 00:02:37.374 CC lib/accel/accel_sw.o 00:02:37.374 CC lib/accel/accel_rpc.o 00:02:37.374 CC lib/virtio/virtio_vfio_user.o 00:02:37.374 CC lib/virtio/virtio.o 00:02:37.374 CC lib/virtio/virtio_vhost_user.o 00:02:37.374 CC lib/virtio/virtio_pci.o 00:02:37.374 CC lib/blob/blobstore.o 00:02:37.374 CC lib/blob/request.o 00:02:37.374 CC lib/blob/zeroes.o 00:02:37.374 CC lib/blob/blob_bs_dev.o 00:02:37.374 CC lib/fsdev/fsdev_io.o 00:02:37.374 CC lib/fsdev/fsdev.o 00:02:37.374 CC lib/init/json_config.o 00:02:37.374 CC lib/fsdev/fsdev_rpc.o 00:02:37.374 CC lib/init/subsystem.o 00:02:37.374 CC lib/init/subsystem_rpc.o 00:02:37.374 CC lib/init/rpc.o 00:02:37.632 LIB libspdk_init.a 00:02:37.632 SO libspdk_init.so.6.0 00:02:37.632 LIB libspdk_virtio.a 00:02:37.632 SO libspdk_virtio.so.7.0 00:02:37.632 SYMLINK libspdk_init.so 00:02:37.891 SYMLINK libspdk_virtio.so 00:02:37.891 LIB libspdk_fsdev.a 00:02:37.891 SO libspdk_fsdev.so.1.0 00:02:38.150 CC lib/event/app.o 00:02:38.150 CC lib/event/reactor.o 00:02:38.150 SYMLINK libspdk_fsdev.so 00:02:38.150 CC lib/event/log_rpc.o 00:02:38.150 CC lib/event/app_rpc.o 00:02:38.150 CC lib/event/scheduler_static.o 00:02:38.150 LIB libspdk_accel.a 00:02:38.150 SO libspdk_accel.so.16.0 00:02:38.409 LIB libspdk_nvme.a 00:02:38.409 SYMLINK libspdk_accel.so 00:02:38.409 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:38.409 LIB libspdk_event.a 00:02:38.409 SO libspdk_nvme.so.15.0 
00:02:38.409 SO libspdk_event.so.15.0 00:02:38.667 SYMLINK libspdk_event.so 00:02:38.667 SYMLINK libspdk_nvme.so 00:02:38.667 CC lib/bdev/bdev.o 00:02:38.667 CC lib/bdev/bdev_rpc.o 00:02:38.667 CC lib/bdev/bdev_zone.o 00:02:38.667 CC lib/bdev/part.o 00:02:38.667 CC lib/bdev/scsi_nvme.o 00:02:38.925 LIB libspdk_fuse_dispatcher.a 00:02:38.925 SO libspdk_fuse_dispatcher.so.1.0 00:02:38.925 SYMLINK libspdk_fuse_dispatcher.so 00:02:39.862 LIB libspdk_blob.a 00:02:39.862 SO libspdk_blob.so.11.0 00:02:39.862 SYMLINK libspdk_blob.so 00:02:40.120 CC lib/blobfs/blobfs.o 00:02:40.120 CC lib/lvol/lvol.o 00:02:40.120 CC lib/blobfs/tree.o 00:02:40.686 LIB libspdk_bdev.a 00:02:40.686 SO libspdk_bdev.so.17.0 00:02:40.686 SYMLINK libspdk_bdev.so 00:02:40.686 LIB libspdk_blobfs.a 00:02:40.944 SO libspdk_blobfs.so.10.0 00:02:40.944 LIB libspdk_lvol.a 00:02:40.944 SO libspdk_lvol.so.10.0 00:02:40.944 SYMLINK libspdk_blobfs.so 00:02:40.944 SYMLINK libspdk_lvol.so 00:02:40.944 CC lib/ftl/ftl_core.o 00:02:40.944 CC lib/ftl/ftl_init.o 00:02:40.944 CC lib/ftl/ftl_layout.o 00:02:40.944 CC lib/ftl/ftl_debug.o 00:02:40.944 CC lib/ftl/ftl_io.o 00:02:40.944 CC lib/ftl/ftl_l2p.o 00:02:40.944 CC lib/ftl/ftl_sb.o 00:02:40.944 CC lib/ftl/ftl_l2p_flat.o 00:02:40.944 CC lib/ftl/ftl_nv_cache.o 00:02:40.944 CC lib/ftl/ftl_band.o 00:02:40.944 CC lib/ftl/ftl_band_ops.o 00:02:40.944 CC lib/ftl/ftl_writer.o 00:02:40.944 CC lib/ftl/ftl_rq.o 00:02:40.944 CC lib/ftl/ftl_l2p_cache.o 00:02:40.944 CC lib/ftl/ftl_reloc.o 00:02:41.211 CC lib/ftl/ftl_p2l.o 00:02:41.211 CC lib/ftl/ftl_p2l_log.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.211 CC lib/nvmf/ctrlr.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.211 CC lib/nbd/nbd.o 00:02:41.211 CC lib/nvmf/ctrlr_discovery.o 00:02:41.211 CC lib/scsi/dev.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.211 CC lib/scsi/lun.o 00:02:41.211 CC lib/nvmf/ctrlr_bdev.o 00:02:41.211 CC lib/nbd/nbd_rpc.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.211 CC lib/nvmf/nvmf.o 00:02:41.211 CC lib/scsi/port.o 00:02:41.211 CC lib/nvmf/subsystem.o 00:02:41.211 CC lib/nvmf/nvmf_rpc.o 00:02:41.211 CC lib/scsi/scsi.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.211 CC lib/scsi/scsi_bdev.o 00:02:41.211 CC lib/scsi/scsi_rpc.o 00:02:41.211 CC lib/scsi/scsi_pr.o 00:02:41.211 CC lib/nvmf/tcp.o 00:02:41.211 CC lib/nvmf/transport.o 00:02:41.211 CC lib/nvmf/stubs.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.211 CC lib/nvmf/mdns_server.o 00:02:41.211 CC lib/scsi/task.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.211 CC lib/nvmf/rdma.o 00:02:41.211 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.211 CC lib/ftl/utils/ftl_conf.o 00:02:41.211 CC lib/ublk/ublk.o 00:02:41.211 CC lib/nvmf/auth.o 00:02:41.211 CC lib/ftl/utils/ftl_md.o 00:02:41.211 CC lib/ftl/utils/ftl_mempool.o 00:02:41.211 CC lib/ublk/ublk_rpc.o 00:02:41.211 CC lib/ftl/utils/ftl_property.o 00:02:41.211 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.211 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.211 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.211 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.211 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.211 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.211 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.211 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.211 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.211 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.211 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.211 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.211 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:41.211 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:41.470 CC lib/ftl/base/ftl_base_dev.o 00:02:41.470 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.470 CC lib/ftl/ftl_trace.o 00:02:41.729 LIB libspdk_nbd.a 00:02:41.729 SO libspdk_nbd.so.7.0 00:02:41.729 SYMLINK libspdk_nbd.so 00:02:41.729 LIB libspdk_scsi.a 00:02:41.988 SO libspdk_scsi.so.9.0 00:02:41.988 LIB libspdk_ublk.a 00:02:41.988 SYMLINK libspdk_scsi.so 00:02:41.988 SO libspdk_ublk.so.3.0 00:02:41.988 SYMLINK libspdk_ublk.so 00:02:42.247 LIB libspdk_ftl.a 00:02:42.247 CC lib/iscsi/conn.o 00:02:42.247 CC lib/iscsi/init_grp.o 00:02:42.247 CC lib/vhost/vhost.o 00:02:42.247 CC lib/iscsi/iscsi.o 00:02:42.247 CC lib/iscsi/param.o 00:02:42.247 CC lib/vhost/vhost_rpc.o 00:02:42.247 CC lib/iscsi/portal_grp.o 00:02:42.247 CC lib/vhost/vhost_scsi.o 00:02:42.247 CC lib/vhost/vhost_blk.o 00:02:42.247 CC lib/iscsi/tgt_node.o 00:02:42.247 CC lib/vhost/rte_vhost_user.o 00:02:42.247 CC lib/iscsi/iscsi_subsystem.o 00:02:42.247 CC lib/iscsi/iscsi_rpc.o 00:02:42.247 CC lib/iscsi/task.o 00:02:42.247 SO libspdk_ftl.so.9.0 00:02:42.506 SYMLINK libspdk_ftl.so 00:02:43.073 LIB libspdk_nvmf.a 00:02:43.073 SO libspdk_nvmf.so.19.0 00:02:43.073 LIB libspdk_vhost.a 00:02:43.073 SO libspdk_vhost.so.8.0 00:02:43.073 SYMLINK libspdk_nvmf.so 00:02:43.332 SYMLINK libspdk_vhost.so 00:02:43.332 LIB libspdk_iscsi.a 00:02:43.332 SO libspdk_iscsi.so.8.0 00:02:43.591 SYMLINK libspdk_iscsi.so 00:02:44.160 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.160 CC module/keyring/linux/keyring.o 00:02:44.160 CC module/keyring/linux/keyring_rpc.o 00:02:44.160 CC module/sock/posix/posix.o 00:02:44.160 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.160 CC module/accel/dsa/accel_dsa.o 00:02:44.160 LIB libspdk_env_dpdk_rpc.a 00:02:44.160 CC module/accel/ioat/accel_ioat.o 00:02:44.160 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.160 CC module/accel/error/accel_error.o 00:02:44.160 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.160 CC module/accel/error/accel_error_rpc.o 00:02:44.160 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.160 CC module/fsdev/aio/fsdev_aio.o 00:02:44.160 CC module/fsdev/aio/linux_aio_mgr.o 00:02:44.160 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:44.160 CC module/blob/bdev/blob_bdev.o 00:02:44.160 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.160 CC module/accel/iaa/accel_iaa.o 00:02:44.160 CC module/keyring/file/keyring.o 00:02:44.160 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.160 CC module/keyring/file/keyring_rpc.o 00:02:44.418 SO libspdk_env_dpdk_rpc.so.6.0 00:02:44.418 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.418 LIB libspdk_keyring_linux.a 00:02:44.418 LIB libspdk_accel_error.a 00:02:44.418 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.418 SO libspdk_keyring_linux.so.1.0 00:02:44.418 LIB libspdk_keyring_file.a 00:02:44.418 LIB libspdk_scheduler_gscheduler.a 00:02:44.418 LIB libspdk_accel_ioat.a 00:02:44.418 LIB libspdk_scheduler_dynamic.a 00:02:44.418 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.418 LIB libspdk_accel_iaa.a 00:02:44.418 SO libspdk_accel_error.so.2.0 00:02:44.418 SO libspdk_accel_ioat.so.6.0 00:02:44.418 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.418 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.418 SO libspdk_keyring_file.so.2.0 00:02:44.418 SO 
libspdk_accel_iaa.so.3.0 00:02:44.418 SYMLINK libspdk_keyring_linux.so 00:02:44.418 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.418 LIB libspdk_accel_dsa.a 00:02:44.418 LIB libspdk_blob_bdev.a 00:02:44.676 SYMLINK libspdk_accel_error.so 00:02:44.676 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.676 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.676 SYMLINK libspdk_accel_ioat.so 00:02:44.676 SO libspdk_blob_bdev.so.11.0 00:02:44.676 SYMLINK libspdk_keyring_file.so 00:02:44.676 SO libspdk_accel_dsa.so.5.0 00:02:44.676 SYMLINK libspdk_accel_iaa.so 00:02:44.676 SYMLINK libspdk_blob_bdev.so 00:02:44.676 SYMLINK libspdk_accel_dsa.so 00:02:44.935 LIB libspdk_fsdev_aio.a 00:02:44.935 SO libspdk_fsdev_aio.so.1.0 00:02:44.935 LIB libspdk_sock_posix.a 00:02:44.935 SO libspdk_sock_posix.so.6.0 00:02:44.935 SYMLINK libspdk_fsdev_aio.so 00:02:44.935 SYMLINK libspdk_sock_posix.so 00:02:45.193 CC module/bdev/gpt/vbdev_gpt.o 00:02:45.193 CC module/bdev/gpt/gpt.o 00:02:45.193 CC module/bdev/malloc/bdev_malloc.o 00:02:45.193 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:45.193 CC module/bdev/error/vbdev_error.o 00:02:45.193 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.193 CC module/bdev/null/bdev_null.o 00:02:45.193 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.193 CC module/bdev/null/bdev_null_rpc.o 00:02:45.193 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:45.193 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.193 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:45.193 CC module/bdev/delay/vbdev_delay.o 00:02:45.193 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.193 CC module/blobfs/bdev/blobfs_bdev.o 00:02:45.193 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:45.193 CC module/bdev/ftl/bdev_ftl.o 00:02:45.193 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:45.193 CC module/bdev/nvme/bdev_nvme.o 00:02:45.193 CC module/bdev/split/vbdev_split.o 00:02:45.193 CC module/bdev/nvme/nvme_rpc.o 00:02:45.193 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.193 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.193 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.193 CC module/bdev/nvme/vbdev_opal.o 00:02:45.193 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:45.193 CC module/bdev/raid/bdev_raid.o 00:02:45.193 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:45.193 CC module/bdev/raid/bdev_raid_sb.o 00:02:45.193 CC module/bdev/raid/raid0.o 00:02:45.193 CC module/bdev/raid/bdev_raid_rpc.o 00:02:45.193 CC module/bdev/raid/raid1.o 00:02:45.193 CC module/bdev/raid/concat.o 00:02:45.193 CC module/bdev/aio/bdev_aio.o 00:02:45.193 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.193 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:45.193 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:45.193 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:45.193 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.193 CC module/bdev/lvol/vbdev_lvol.o 00:02:45.193 CC module/bdev/iscsi/bdev_iscsi.o 00:02:45.193 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:45.451 LIB libspdk_blobfs_bdev.a 00:02:45.451 SO libspdk_blobfs_bdev.so.6.0 00:02:45.451 LIB libspdk_bdev_split.a 00:02:45.451 LIB libspdk_bdev_null.a 00:02:45.451 SO libspdk_bdev_split.so.6.0 00:02:45.451 SO libspdk_bdev_null.so.6.0 00:02:45.451 LIB libspdk_bdev_passthru.a 00:02:45.451 LIB libspdk_bdev_gpt.a 00:02:45.451 SYMLINK libspdk_blobfs_bdev.so 00:02:45.451 SO libspdk_bdev_passthru.so.6.0 00:02:45.451 SYMLINK libspdk_bdev_split.so 00:02:45.451 LIB libspdk_bdev_zone_block.a 00:02:45.451 SO libspdk_bdev_gpt.so.6.0 00:02:45.451 SYMLINK libspdk_bdev_null.so 00:02:45.451 LIB libspdk_bdev_delay.a 
00:02:45.451 LIB libspdk_bdev_error.a 00:02:45.451 SO libspdk_bdev_zone_block.so.6.0 00:02:45.451 SO libspdk_bdev_delay.so.6.0 00:02:45.451 LIB libspdk_bdev_aio.a 00:02:45.451 SO libspdk_bdev_error.so.6.0 00:02:45.451 LIB libspdk_bdev_ftl.a 00:02:45.451 LIB libspdk_bdev_iscsi.a 00:02:45.451 SYMLINK libspdk_bdev_gpt.so 00:02:45.451 SO libspdk_bdev_aio.so.6.0 00:02:45.451 SYMLINK libspdk_bdev_passthru.so 00:02:45.711 SYMLINK libspdk_bdev_zone_block.so 00:02:45.711 SO libspdk_bdev_ftl.so.6.0 00:02:45.711 SO libspdk_bdev_iscsi.so.6.0 00:02:45.711 SYMLINK libspdk_bdev_delay.so 00:02:45.711 SYMLINK libspdk_bdev_error.so 00:02:45.711 SYMLINK libspdk_bdev_aio.so 00:02:45.711 LIB libspdk_bdev_malloc.a 00:02:45.711 LIB libspdk_bdev_lvol.a 00:02:45.711 SYMLINK libspdk_bdev_iscsi.so 00:02:45.711 SYMLINK libspdk_bdev_ftl.so 00:02:45.711 SO libspdk_bdev_malloc.so.6.0 00:02:45.711 SO libspdk_bdev_lvol.so.6.0 00:02:45.711 LIB libspdk_bdev_virtio.a 00:02:45.711 SO libspdk_bdev_virtio.so.6.0 00:02:45.711 SYMLINK libspdk_bdev_malloc.so 00:02:45.711 SYMLINK libspdk_bdev_lvol.so 00:02:45.711 SYMLINK libspdk_bdev_virtio.so 00:02:45.969 LIB libspdk_bdev_raid.a 00:02:45.969 SO libspdk_bdev_raid.so.6.0 00:02:46.227 SYMLINK libspdk_bdev_raid.so 00:02:47.164 LIB libspdk_bdev_nvme.a 00:02:47.164 SO libspdk_bdev_nvme.so.7.0 00:02:47.164 SYMLINK libspdk_bdev_nvme.so 00:02:47.780 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.780 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.780 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.780 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.780 CC module/event/subsystems/vmd/vmd.o 00:02:47.780 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.780 CC module/event/subsystems/keyring/keyring.o 00:02:47.780 CC module/event/subsystems/fsdev/fsdev.o 00:02:48.103 CC module/event/subsystems/sock/sock.o 00:02:48.103 LIB libspdk_event_iobuf.a 00:02:48.103 LIB libspdk_event_keyring.a 00:02:48.103 LIB libspdk_event_scheduler.a 00:02:48.103 LIB libspdk_event_sock.a 00:02:48.103 LIB libspdk_event_fsdev.a 00:02:48.103 LIB libspdk_event_vhost_blk.a 00:02:48.103 LIB libspdk_event_vmd.a 00:02:48.103 SO libspdk_event_keyring.so.1.0 00:02:48.103 SO libspdk_event_iobuf.so.3.0 00:02:48.103 SO libspdk_event_sock.so.5.0 00:02:48.103 SO libspdk_event_fsdev.so.1.0 00:02:48.103 SO libspdk_event_scheduler.so.4.0 00:02:48.103 SO libspdk_event_vhost_blk.so.3.0 00:02:48.103 SO libspdk_event_vmd.so.6.0 00:02:48.103 SYMLINK libspdk_event_sock.so 00:02:48.103 SYMLINK libspdk_event_scheduler.so 00:02:48.103 SYMLINK libspdk_event_keyring.so 00:02:48.103 SYMLINK libspdk_event_fsdev.so 00:02:48.103 SYMLINK libspdk_event_vhost_blk.so 00:02:48.103 SYMLINK libspdk_event_iobuf.so 00:02:48.103 SYMLINK libspdk_event_vmd.so 00:02:48.363 CC module/event/subsystems/accel/accel.o 00:02:48.622 LIB libspdk_event_accel.a 00:02:48.622 SO libspdk_event_accel.so.6.0 00:02:48.622 SYMLINK libspdk_event_accel.so 00:02:49.189 CC module/event/subsystems/bdev/bdev.o 00:02:49.189 LIB libspdk_event_bdev.a 00:02:49.189 SO libspdk_event_bdev.so.6.0 00:02:49.450 SYMLINK libspdk_event_bdev.so 00:02:49.709 CC module/event/subsystems/scsi/scsi.o 00:02:49.709 CC module/event/subsystems/ublk/ublk.o 00:02:49.709 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.709 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.709 CC module/event/subsystems/nbd/nbd.o 00:02:49.967 LIB libspdk_event_ublk.a 00:02:49.967 LIB libspdk_event_nbd.a 00:02:49.967 LIB libspdk_event_scsi.a 00:02:49.967 SO libspdk_event_ublk.so.3.0 00:02:49.967 
SO libspdk_event_nbd.so.6.0 00:02:49.967 SO libspdk_event_scsi.so.6.0 00:02:49.967 LIB libspdk_event_nvmf.a 00:02:49.967 SYMLINK libspdk_event_ublk.so 00:02:49.967 SYMLINK libspdk_event_nbd.so 00:02:49.967 SO libspdk_event_nvmf.so.6.0 00:02:49.967 SYMLINK libspdk_event_scsi.so 00:02:49.967 SYMLINK libspdk_event_nvmf.so 00:02:50.537 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.537 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.537 LIB libspdk_event_vhost_scsi.a 00:02:50.537 SO libspdk_event_vhost_scsi.so.3.0 00:02:50.537 LIB libspdk_event_iscsi.a 00:02:50.537 SO libspdk_event_iscsi.so.6.0 00:02:50.537 SYMLINK libspdk_event_vhost_scsi.so 00:02:50.796 SYMLINK libspdk_event_iscsi.so 00:02:50.796 SO libspdk.so.6.0 00:02:50.796 SYMLINK libspdk.so 00:02:51.371 CC app/trace_record/trace_record.o 00:02:51.371 CXX app/trace/trace.o 00:02:51.371 CC app/spdk_top/spdk_top.o 00:02:51.371 CC test/rpc_client/rpc_client_test.o 00:02:51.371 TEST_HEADER include/spdk/accel.h 00:02:51.371 TEST_HEADER include/spdk/accel_module.h 00:02:51.371 CC app/spdk_nvme_discover/discovery_aer.o 00:02:51.371 TEST_HEADER include/spdk/assert.h 00:02:51.371 TEST_HEADER include/spdk/barrier.h 00:02:51.371 TEST_HEADER include/spdk/base64.h 00:02:51.371 TEST_HEADER include/spdk/bdev.h 00:02:51.371 TEST_HEADER include/spdk/bdev_module.h 00:02:51.371 TEST_HEADER include/spdk/bit_array.h 00:02:51.371 TEST_HEADER include/spdk/bdev_zone.h 00:02:51.371 TEST_HEADER include/spdk/bit_pool.h 00:02:51.371 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:51.371 TEST_HEADER include/spdk/blob_bdev.h 00:02:51.371 TEST_HEADER include/spdk/blob.h 00:02:51.371 TEST_HEADER include/spdk/blobfs.h 00:02:51.371 TEST_HEADER include/spdk/config.h 00:02:51.371 TEST_HEADER include/spdk/cpuset.h 00:02:51.371 TEST_HEADER include/spdk/conf.h 00:02:51.371 TEST_HEADER include/spdk/crc16.h 00:02:51.371 TEST_HEADER include/spdk/crc32.h 00:02:51.371 CC app/spdk_nvme_perf/perf.o 00:02:51.371 TEST_HEADER include/spdk/crc64.h 00:02:51.371 TEST_HEADER include/spdk/dma.h 00:02:51.371 TEST_HEADER include/spdk/endian.h 00:02:51.371 TEST_HEADER include/spdk/dif.h 00:02:51.371 CC app/spdk_lspci/spdk_lspci.o 00:02:51.371 TEST_HEADER include/spdk/env_dpdk.h 00:02:51.371 TEST_HEADER include/spdk/event.h 00:02:51.371 TEST_HEADER include/spdk/fd_group.h 00:02:51.371 TEST_HEADER include/spdk/env.h 00:02:51.371 TEST_HEADER include/spdk/fd.h 00:02:51.371 TEST_HEADER include/spdk/file.h 00:02:51.371 TEST_HEADER include/spdk/fsdev.h 00:02:51.371 TEST_HEADER include/spdk/ftl.h 00:02:51.371 TEST_HEADER include/spdk/fsdev_module.h 00:02:51.371 CC app/spdk_nvme_identify/identify.o 00:02:51.371 TEST_HEADER include/spdk/hexlify.h 00:02:51.371 TEST_HEADER include/spdk/gpt_spec.h 00:02:51.371 TEST_HEADER include/spdk/histogram_data.h 00:02:51.371 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:51.371 TEST_HEADER include/spdk/idxd.h 00:02:51.371 TEST_HEADER include/spdk/init.h 00:02:51.371 TEST_HEADER include/spdk/idxd_spec.h 00:02:51.371 TEST_HEADER include/spdk/ioat.h 00:02:51.371 TEST_HEADER include/spdk/ioat_spec.h 00:02:51.371 TEST_HEADER include/spdk/iscsi_spec.h 00:02:51.371 TEST_HEADER include/spdk/json.h 00:02:51.371 TEST_HEADER include/spdk/jsonrpc.h 00:02:51.371 TEST_HEADER include/spdk/keyring.h 00:02:51.371 TEST_HEADER include/spdk/keyring_module.h 00:02:51.371 TEST_HEADER include/spdk/likely.h 00:02:51.371 TEST_HEADER include/spdk/lvol.h 00:02:51.371 TEST_HEADER include/spdk/log.h 00:02:51.371 CC app/nvmf_tgt/nvmf_main.o 00:02:51.371 CC app/spdk_dd/spdk_dd.o 
00:02:51.371 TEST_HEADER include/spdk/md5.h 00:02:51.371 TEST_HEADER include/spdk/mmio.h 00:02:51.371 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:51.371 TEST_HEADER include/spdk/memory.h 00:02:51.371 TEST_HEADER include/spdk/net.h 00:02:51.371 TEST_HEADER include/spdk/nbd.h 00:02:51.371 TEST_HEADER include/spdk/notify.h 00:02:51.371 TEST_HEADER include/spdk/nvme.h 00:02:51.371 TEST_HEADER include/spdk/nvme_intel.h 00:02:51.371 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:51.371 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:51.371 TEST_HEADER include/spdk/nvme_spec.h 00:02:51.371 TEST_HEADER include/spdk/nvme_zns.h 00:02:51.371 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:51.371 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:51.371 TEST_HEADER include/spdk/nvmf_spec.h 00:02:51.371 TEST_HEADER include/spdk/nvmf.h 00:02:51.371 TEST_HEADER include/spdk/nvmf_transport.h 00:02:51.371 TEST_HEADER include/spdk/opal.h 00:02:51.371 TEST_HEADER include/spdk/opal_spec.h 00:02:51.371 TEST_HEADER include/spdk/pci_ids.h 00:02:51.371 CC app/spdk_tgt/spdk_tgt.o 00:02:51.371 TEST_HEADER include/spdk/pipe.h 00:02:51.371 TEST_HEADER include/spdk/queue.h 00:02:51.371 TEST_HEADER include/spdk/rpc.h 00:02:51.371 TEST_HEADER include/spdk/reduce.h 00:02:51.371 TEST_HEADER include/spdk/scheduler.h 00:02:51.371 TEST_HEADER include/spdk/scsi.h 00:02:51.371 TEST_HEADER include/spdk/scsi_spec.h 00:02:51.371 TEST_HEADER include/spdk/sock.h 00:02:51.371 TEST_HEADER include/spdk/stdinc.h 00:02:51.371 TEST_HEADER include/spdk/string.h 00:02:51.371 TEST_HEADER include/spdk/thread.h 00:02:51.371 TEST_HEADER include/spdk/trace.h 00:02:51.371 TEST_HEADER include/spdk/trace_parser.h 00:02:51.371 TEST_HEADER include/spdk/tree.h 00:02:51.371 TEST_HEADER include/spdk/ublk.h 00:02:51.371 TEST_HEADER include/spdk/util.h 00:02:51.371 TEST_HEADER include/spdk/uuid.h 00:02:51.371 TEST_HEADER include/spdk/version.h 00:02:51.371 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:51.371 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:51.371 TEST_HEADER include/spdk/vhost.h 00:02:51.371 TEST_HEADER include/spdk/vmd.h 00:02:51.371 TEST_HEADER include/spdk/xor.h 00:02:51.371 TEST_HEADER include/spdk/zipf.h 00:02:51.371 CXX test/cpp_headers/accel.o 00:02:51.371 CXX test/cpp_headers/accel_module.o 00:02:51.371 CXX test/cpp_headers/assert.o 00:02:51.371 CXX test/cpp_headers/barrier.o 00:02:51.371 CXX test/cpp_headers/base64.o 00:02:51.371 CXX test/cpp_headers/bdev.o 00:02:51.371 CXX test/cpp_headers/bdev_module.o 00:02:51.371 CXX test/cpp_headers/bdev_zone.o 00:02:51.371 CXX test/cpp_headers/bit_array.o 00:02:51.371 CXX test/cpp_headers/bit_pool.o 00:02:51.371 CXX test/cpp_headers/blob_bdev.o 00:02:51.371 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.371 CXX test/cpp_headers/blobfs.o 00:02:51.371 CXX test/cpp_headers/conf.o 00:02:51.371 CXX test/cpp_headers/blob.o 00:02:51.371 CXX test/cpp_headers/cpuset.o 00:02:51.371 CXX test/cpp_headers/config.o 00:02:51.371 CXX test/cpp_headers/crc16.o 00:02:51.371 CXX test/cpp_headers/crc32.o 00:02:51.371 CXX test/cpp_headers/dif.o 00:02:51.371 CXX test/cpp_headers/crc64.o 00:02:51.371 CXX test/cpp_headers/dma.o 00:02:51.371 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.371 CXX test/cpp_headers/endian.o 00:02:51.371 CXX test/cpp_headers/env_dpdk.o 00:02:51.371 CXX test/cpp_headers/env.o 00:02:51.371 CXX test/cpp_headers/event.o 00:02:51.371 CXX test/cpp_headers/fd_group.o 00:02:51.371 CXX test/cpp_headers/fd.o 00:02:51.371 CXX test/cpp_headers/file.o 00:02:51.371 CXX test/cpp_headers/fsdev.o 00:02:51.371 CXX 
test/cpp_headers/fsdev_module.o 00:02:51.371 CXX test/cpp_headers/ftl.o 00:02:51.371 CXX test/cpp_headers/fuse_dispatcher.o 00:02:51.371 CXX test/cpp_headers/gpt_spec.o 00:02:51.371 CXX test/cpp_headers/histogram_data.o 00:02:51.371 CXX test/cpp_headers/hexlify.o 00:02:51.371 CXX test/cpp_headers/idxd.o 00:02:51.371 CXX test/cpp_headers/idxd_spec.o 00:02:51.371 CXX test/cpp_headers/ioat.o 00:02:51.371 CXX test/cpp_headers/init.o 00:02:51.371 CXX test/cpp_headers/ioat_spec.o 00:02:51.371 CXX test/cpp_headers/iscsi_spec.o 00:02:51.371 CXX test/cpp_headers/json.o 00:02:51.371 CC app/fio/nvme/fio_plugin.o 00:02:51.371 CC examples/util/zipf/zipf.o 00:02:51.371 CC examples/ioat/perf/perf.o 00:02:51.371 CC test/thread/poller_perf/poller_perf.o 00:02:51.371 CC test/env/pci/pci_ut.o 00:02:51.371 CC test/app/histogram_perf/histogram_perf.o 00:02:51.371 CC test/env/memory/memory_ut.o 00:02:51.371 CC examples/ioat/verify/verify.o 00:02:51.371 CC test/app/jsoncat/jsoncat.o 00:02:51.371 CC test/env/vtophys/vtophys.o 00:02:51.371 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.371 CC test/app/stub/stub.o 00:02:51.371 CC test/dma/test_dma/test_dma.o 00:02:51.371 CC app/fio/bdev/fio_plugin.o 00:02:51.633 CC test/app/bdev_svc/bdev_svc.o 00:02:51.633 LINK spdk_lspci 00:02:51.633 LINK rpc_client_test 00:02:51.633 LINK spdk_nvme_discover 00:02:51.633 CC test/env/mem_callbacks/mem_callbacks.o 00:02:51.633 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:51.633 LINK interrupt_tgt 00:02:51.633 LINK spdk_trace_record 00:02:51.633 LINK nvmf_tgt 00:02:51.894 LINK histogram_perf 00:02:51.894 LINK jsoncat 00:02:51.894 CXX test/cpp_headers/jsonrpc.o 00:02:51.894 LINK poller_perf 00:02:51.894 LINK zipf 00:02:51.894 CXX test/cpp_headers/keyring.o 00:02:51.894 LINK iscsi_tgt 00:02:51.894 LINK vtophys 00:02:51.894 CXX test/cpp_headers/keyring_module.o 00:02:51.894 CXX test/cpp_headers/likely.o 00:02:51.894 CXX test/cpp_headers/log.o 00:02:51.894 LINK spdk_tgt 00:02:51.894 CXX test/cpp_headers/lvol.o 00:02:51.894 CXX test/cpp_headers/md5.o 00:02:51.894 CXX test/cpp_headers/memory.o 00:02:51.894 CXX test/cpp_headers/mmio.o 00:02:51.894 CXX test/cpp_headers/nbd.o 00:02:51.894 CXX test/cpp_headers/net.o 00:02:51.894 CXX test/cpp_headers/notify.o 00:02:51.894 LINK env_dpdk_post_init 00:02:51.894 CXX test/cpp_headers/nvme.o 00:02:51.894 CXX test/cpp_headers/nvme_intel.o 00:02:51.894 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.894 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.894 CXX test/cpp_headers/nvme_spec.o 00:02:51.894 CXX test/cpp_headers/nvme_zns.o 00:02:51.894 CXX test/cpp_headers/nvmf_cmd.o 00:02:51.894 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:51.894 LINK verify 00:02:51.894 CXX test/cpp_headers/nvmf.o 00:02:51.894 CXX test/cpp_headers/nvmf_spec.o 00:02:51.894 CXX test/cpp_headers/nvmf_transport.o 00:02:51.894 CXX test/cpp_headers/opal.o 00:02:51.894 CXX test/cpp_headers/opal_spec.o 00:02:51.894 CXX test/cpp_headers/pci_ids.o 00:02:51.894 CXX test/cpp_headers/pipe.o 00:02:51.894 CXX test/cpp_headers/queue.o 00:02:51.894 LINK stub 00:02:51.894 CXX test/cpp_headers/reduce.o 00:02:51.894 CXX test/cpp_headers/rpc.o 00:02:51.894 CXX test/cpp_headers/scheduler.o 00:02:51.894 CXX test/cpp_headers/scsi.o 00:02:51.894 CXX test/cpp_headers/scsi_spec.o 00:02:51.894 CXX test/cpp_headers/sock.o 00:02:51.894 CXX test/cpp_headers/stdinc.o 00:02:51.894 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.894 CXX test/cpp_headers/thread.o 00:02:51.894 CXX test/cpp_headers/string.o 00:02:51.894 CXX test/cpp_headers/trace.o 
00:02:51.894 LINK ioat_perf 00:02:51.894 LINK bdev_svc 00:02:51.894 CXX test/cpp_headers/trace_parser.o 00:02:51.894 CXX test/cpp_headers/ublk.o 00:02:51.894 CXX test/cpp_headers/tree.o 00:02:51.894 CXX test/cpp_headers/util.o 00:02:51.894 CXX test/cpp_headers/uuid.o 00:02:51.894 CXX test/cpp_headers/version.o 00:02:51.894 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.156 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:52.156 LINK spdk_dd 00:02:52.156 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.156 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:52.156 CXX test/cpp_headers/vhost.o 00:02:52.156 CXX test/cpp_headers/vmd.o 00:02:52.156 CXX test/cpp_headers/xor.o 00:02:52.156 LINK spdk_trace 00:02:52.156 CXX test/cpp_headers/zipf.o 00:02:52.156 LINK pci_ut 00:02:52.415 LINK spdk_nvme 00:02:52.415 LINK test_dma 00:02:52.415 LINK nvme_fuzz 00:02:52.415 LINK spdk_bdev 00:02:52.415 CC test/event/event_perf/event_perf.o 00:02:52.415 CC test/event/reactor/reactor.o 00:02:52.415 CC test/event/reactor_perf/reactor_perf.o 00:02:52.415 CC examples/sock/hello_world/hello_sock.o 00:02:52.415 CC examples/vmd/led/led.o 00:02:52.415 CC examples/idxd/perf/perf.o 00:02:52.415 CC examples/vmd/lsvmd/lsvmd.o 00:02:52.415 CC test/event/app_repeat/app_repeat.o 00:02:52.415 CC test/event/scheduler/scheduler.o 00:02:52.415 CC examples/thread/thread/thread_ex.o 00:02:52.415 LINK mem_callbacks 00:02:52.675 LINK lsvmd 00:02:52.675 LINK reactor 00:02:52.675 LINK reactor_perf 00:02:52.675 LINK led 00:02:52.675 LINK event_perf 00:02:52.675 LINK spdk_nvme_perf 00:02:52.675 LINK vhost_fuzz 00:02:52.675 LINK spdk_top 00:02:52.675 CC app/vhost/vhost.o 00:02:52.675 LINK spdk_nvme_identify 00:02:52.675 LINK app_repeat 00:02:52.675 LINK hello_sock 00:02:52.675 LINK thread 00:02:52.675 LINK idxd_perf 00:02:52.675 LINK scheduler 00:02:52.933 LINK vhost 00:02:52.933 CC test/nvme/sgl/sgl.o 00:02:52.933 CC test/nvme/reserve/reserve.o 00:02:52.933 CC test/nvme/overhead/overhead.o 00:02:52.933 CC test/nvme/startup/startup.o 00:02:52.933 CC test/nvme/aer/aer.o 00:02:52.933 CC test/nvme/compliance/nvme_compliance.o 00:02:52.933 CC test/nvme/boot_partition/boot_partition.o 00:02:52.933 CC test/nvme/simple_copy/simple_copy.o 00:02:52.933 CC test/nvme/reset/reset.o 00:02:52.933 CC test/nvme/fused_ordering/fused_ordering.o 00:02:52.933 CC test/nvme/connect_stress/connect_stress.o 00:02:52.933 CC test/nvme/cuse/cuse.o 00:02:52.933 CC test/nvme/err_injection/err_injection.o 00:02:52.933 CC test/nvme/fdp/fdp.o 00:02:52.933 CC test/nvme/e2edp/nvme_dp.o 00:02:52.933 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:52.933 CC test/accel/dif/dif.o 00:02:52.933 CC test/blobfs/mkfs/mkfs.o 00:02:52.933 LINK memory_ut 00:02:52.933 CC test/lvol/esnap/esnap.o 00:02:53.191 LINK startup 00:02:53.191 LINK boot_partition 00:02:53.191 LINK reserve 00:02:53.191 LINK connect_stress 00:02:53.191 CC examples/nvme/reconnect/reconnect.o 00:02:53.191 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:53.191 CC examples/nvme/hotplug/hotplug.o 00:02:53.191 LINK fused_ordering 00:02:53.191 CC examples/nvme/arbitration/arbitration.o 00:02:53.191 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:53.191 CC examples/nvme/abort/abort.o 00:02:53.191 LINK err_injection 00:02:53.191 CC examples/nvme/hello_world/hello_world.o 00:02:53.191 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:53.191 LINK doorbell_aers 00:02:53.191 LINK simple_copy 00:02:53.191 LINK reset 00:02:53.191 LINK aer 00:02:53.191 LINK mkfs 00:02:53.191 LINK sgl 00:02:53.191 LINK nvme_dp 00:02:53.191 LINK 
overhead 00:02:53.191 LINK nvme_compliance 00:02:53.191 CC examples/accel/perf/accel_perf.o 00:02:53.191 CC examples/blob/cli/blobcli.o 00:02:53.191 CC examples/blob/hello_world/hello_blob.o 00:02:53.191 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:53.191 LINK fdp 00:02:53.191 LINK cmb_copy 00:02:53.191 LINK pmr_persistence 00:02:53.450 LINK hotplug 00:02:53.450 LINK hello_world 00:02:53.450 LINK arbitration 00:02:53.450 LINK reconnect 00:02:53.450 LINK abort 00:02:53.450 LINK hello_blob 00:02:53.450 LINK hello_fsdev 00:02:53.450 LINK dif 00:02:53.450 LINK nvme_manage 00:02:53.708 LINK iscsi_fuzz 00:02:53.708 LINK accel_perf 00:02:53.708 LINK blobcli 00:02:53.967 LINK cuse 00:02:54.226 CC test/bdev/bdevio/bdevio.o 00:02:54.226 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.226 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.485 LINK hello_bdev 00:02:54.485 LINK bdevio 00:02:54.744 LINK bdevperf 00:02:55.314 CC examples/nvmf/nvmf/nvmf.o 00:02:55.573 LINK nvmf 00:02:56.627 LINK esnap 00:02:56.886 00:02:56.886 real 0m58.708s 00:02:56.886 user 8m20.075s 00:02:56.886 sys 3m25.948s 00:02:56.886 18:08:09 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:56.886 18:08:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:56.886 ************************************ 00:02:56.886 END TEST make 00:02:56.886 ************************************ 00:02:56.886 18:08:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:56.886 18:08:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:56.886 18:08:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:56.886 18:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.886 18:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:56.886 18:08:10 -- pm/common@44 -- $ pid=3187024 00:02:56.886 18:08:10 -- pm/common@50 -- $ kill -TERM 3187024 00:02:56.886 18:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.886 18:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:56.886 18:08:10 -- pm/common@44 -- $ pid=3187026 00:02:56.886 18:08:10 -- pm/common@50 -- $ kill -TERM 3187026 00:02:56.886 18:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.886 18:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:56.886 18:08:10 -- pm/common@44 -- $ pid=3187028 00:02:56.886 18:08:10 -- pm/common@50 -- $ kill -TERM 3187028 00:02:56.886 18:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.886 18:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:56.886 18:08:10 -- pm/common@44 -- $ pid=3187051 00:02:56.886 18:08:10 -- pm/common@50 -- $ sudo -E kill -TERM 3187051 00:02:57.144 18:08:10 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:57.144 18:08:10 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:57.144 18:08:10 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:57.144 18:08:10 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:57.144 18:08:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:57.144 18:08:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:57.144 18:08:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:57.144 18:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:02:57.144 18:08:10 -- 
scripts/common.sh@336 -- # read -ra ver1 00:02:57.144 18:08:10 -- scripts/common.sh@337 -- # IFS=.-: 00:02:57.144 18:08:10 -- scripts/common.sh@337 -- # read -ra ver2 00:02:57.144 18:08:10 -- scripts/common.sh@338 -- # local 'op=<' 00:02:57.144 18:08:10 -- scripts/common.sh@340 -- # ver1_l=2 00:02:57.144 18:08:10 -- scripts/common.sh@341 -- # ver2_l=1 00:02:57.144 18:08:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:57.144 18:08:10 -- scripts/common.sh@344 -- # case "$op" in 00:02:57.144 18:08:10 -- scripts/common.sh@345 -- # : 1 00:02:57.144 18:08:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:57.144 18:08:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:57.144 18:08:10 -- scripts/common.sh@365 -- # decimal 1 00:02:57.144 18:08:10 -- scripts/common.sh@353 -- # local d=1 00:02:57.144 18:08:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:57.144 18:08:10 -- scripts/common.sh@355 -- # echo 1 00:02:57.144 18:08:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:57.144 18:08:10 -- scripts/common.sh@366 -- # decimal 2 00:02:57.144 18:08:10 -- scripts/common.sh@353 -- # local d=2 00:02:57.144 18:08:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.144 18:08:10 -- scripts/common.sh@355 -- # echo 2 00:02:57.144 18:08:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:57.144 18:08:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:57.144 18:08:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:57.144 18:08:10 -- scripts/common.sh@368 -- # return 0 00:02:57.144 18:08:10 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.144 18:08:10 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.144 --rc genhtml_branch_coverage=1 00:02:57.144 --rc genhtml_function_coverage=1 00:02:57.144 --rc genhtml_legend=1 00:02:57.144 --rc geninfo_all_blocks=1 00:02:57.144 --rc geninfo_unexecuted_blocks=1 00:02:57.144 00:02:57.144 ' 00:02:57.144 18:08:10 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.144 --rc genhtml_branch_coverage=1 00:02:57.144 --rc genhtml_function_coverage=1 00:02:57.144 --rc genhtml_legend=1 00:02:57.144 --rc geninfo_all_blocks=1 00:02:57.144 --rc geninfo_unexecuted_blocks=1 00:02:57.144 00:02:57.144 ' 00:02:57.144 18:08:10 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.144 --rc genhtml_branch_coverage=1 00:02:57.144 --rc genhtml_function_coverage=1 00:02:57.144 --rc genhtml_legend=1 00:02:57.144 --rc geninfo_all_blocks=1 00:02:57.144 --rc geninfo_unexecuted_blocks=1 00:02:57.145 00:02:57.145 ' 00:02:57.145 18:08:10 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.145 --rc genhtml_branch_coverage=1 00:02:57.145 --rc genhtml_function_coverage=1 00:02:57.145 --rc genhtml_legend=1 00:02:57.145 --rc geninfo_all_blocks=1 00:02:57.145 --rc geninfo_unexecuted_blocks=1 00:02:57.145 00:02:57.145 ' 00:02:57.145 18:08:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:57.145 18:08:10 -- nvmf/common.sh@7 -- # uname -s 00:02:57.145 18:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:57.145 18:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:02:57.145 18:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:57.145 18:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:57.145 18:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:57.145 18:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:57.145 18:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:57.145 18:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:57.145 18:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:57.145 18:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:57.145 18:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:02:57.145 18:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:02:57.145 18:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:57.145 18:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:57.145 18:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:57.145 18:08:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:57.145 18:08:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:57.145 18:08:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:57.145 18:08:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:57.145 18:08:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:57.145 18:08:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:57.145 18:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.145 18:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.145 18:08:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.145 18:08:10 -- paths/export.sh@5 -- # export PATH 00:02:57.145 18:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.145 18:08:10 -- nvmf/common.sh@51 -- # : 0 00:02:57.145 18:08:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:57.145 18:08:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:57.145 18:08:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:57.145 18:08:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:57.145 18:08:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:57.145 18:08:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:57.145 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:57.145 18:08:10 -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:57.145 18:08:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:57.145 18:08:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:57.145 18:08:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:57.145 18:08:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:57.145 18:08:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:57.145 18:08:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:57.145 18:08:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:57.145 18:08:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:57.145 18:08:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:57.145 18:08:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:57.145 18:08:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:57.145 18:08:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:57.145 18:08:10 -- spdk/autotest.sh@48 -- # udevadm_pid=3246952 00:02:57.145 18:08:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:57.145 18:08:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:57.145 18:08:10 -- pm/common@17 -- # local monitor 00:02:57.145 18:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.145 18:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.145 18:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.145 18:08:10 -- pm/common@21 -- # date +%s 00:02:57.145 18:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.145 18:08:10 -- pm/common@21 -- # date +%s 00:02:57.145 18:08:10 -- pm/common@25 -- # sleep 1 00:02:57.145 18:08:10 -- pm/common@21 -- # date +%s 00:02:57.145 18:08:10 -- pm/common@21 -- # date +%s 00:02:57.145 18:08:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403690 00:02:57.145 18:08:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403690 00:02:57.145 18:08:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403690 00:02:57.145 18:08:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403690 00:02:57.404 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403690_collect-vmstat.pm.log 00:02:57.404 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403690_collect-cpu-load.pm.log 00:02:57.404 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403690_collect-cpu-temp.pm.log 00:02:57.404 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403690_collect-bmc-pm.bmc.pm.log 00:02:58.344 18:08:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 
00:02:58.344 18:08:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:58.344 18:08:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:58.344 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:02:58.344 18:08:11 -- spdk/autotest.sh@59 -- # create_test_list 00:02:58.344 18:08:11 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:58.344 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:02:58.344 18:08:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:58.344 18:08:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.344 18:08:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.344 18:08:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:58.344 18:08:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.344 18:08:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:58.344 18:08:11 -- common/autotest_common.sh@1455 -- # uname 00:02:58.344 18:08:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:58.344 18:08:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:58.344 18:08:11 -- common/autotest_common.sh@1475 -- # uname 00:02:58.344 18:08:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:58.344 18:08:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:58.344 18:08:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:58.344 lcov: LCOV version 1.15 00:02:58.344 18:08:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:10.562 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:25.451 18:08:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:25.451 18:08:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:25.451 18:08:36 -- common/autotest_common.sh@10 -- # set +x 00:03:25.451 18:08:36 -- spdk/autotest.sh@78 -- # rm -f 00:03:25.451 18:08:36 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.389 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:26.389 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:26.389 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:26.389 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:26.648 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:26.648 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:26.908 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:26.908 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:26.908 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:26.908 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:26.908 18:08:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:26.908 18:08:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:26.908 18:08:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:26.908 18:08:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:26.908 18:08:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:26.908 18:08:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:26.908 18:08:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:26.908 18:08:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.908 18:08:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:26.908 18:08:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:26.908 18:08:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:26.908 18:08:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:26.908 18:08:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:26.908 18:08:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:26.908 18:08:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.908 No valid GPT data, bailing 00:03:26.908 18:08:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.908 18:08:40 -- scripts/common.sh@394 -- # pt= 00:03:26.908 18:08:40 -- scripts/common.sh@395 -- # return 1 00:03:26.908 18:08:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.908 1+0 records in 00:03:26.908 1+0 records out 00:03:26.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685504 s, 153 MB/s 00:03:26.908 18:08:40 -- spdk/autotest.sh@105 -- # sync 00:03:27.167 18:08:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.167 18:08:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.167 18:08:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.444 18:08:45 -- spdk/autotest.sh@111 -- # uname -s 00:03:32.444 18:08:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:32.444 18:08:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:32.444 18:08:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:35.736 Hugepages 00:03:35.736 node hugesize free / total 00:03:35.736 node0 1048576kB 0 / 0 00:03:35.736 node0 2048kB 0 / 0 00:03:35.736 node1 1048576kB 0 / 0 00:03:35.736 node1 2048kB 0 / 0 00:03:35.736 00:03:35.736 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.736 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:35.736 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:03:35.736 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:35.995 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:35.995 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:35.995 18:08:48 -- spdk/autotest.sh@117 -- # uname -s 00:03:35.995 18:08:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:35.995 18:08:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:35.995 18:08:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:39.286 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:39.286 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.579 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.579 18:08:55 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:43.655 18:08:56 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:43.655 18:08:56 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:43.655 18:08:56 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.655 18:08:56 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:43.655 18:08:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:43.655 18:08:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:43.655 18:08:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.655 18:08:56 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.655 18:08:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:43.655 18:08:56 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:43.655 18:08:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5f:00.0 00:03:43.655 18:08:56 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.945 Waiting for block devices as requested 00:03:46.945 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.945 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.945 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:47.204 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:47.204 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:47.204 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:47.464 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:47.464 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:47.464 
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:47.723 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:47.723 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:47.723 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:47.983 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:47.983 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:47.983 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:48.243 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:48.243 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:48.243 18:09:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:48.243 18:09:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1485 -- # grep 0000:5f:00.0/nvme/nvme 00:03:48.243 18:09:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:03:48.243 18:09:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:48.243 18:09:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:48.504 18:09:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:48.504 18:09:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:48.504 18:09:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:48.504 18:09:01 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:48.504 18:09:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:48.504 18:09:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:48.504 18:09:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:48.504 18:09:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:48.504 18:09:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:48.504 18:09:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:48.504 18:09:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:48.504 18:09:01 -- common/autotest_common.sh@1541 -- # continue 00:03:48.504 18:09:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:48.504 18:09:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:48.504 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:03:48.504 18:09:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:48.504 18:09:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.504 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:03:48.504 18:09:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:51.797 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:51.797 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.797 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.091 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.091 18:09:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:55.091 18:09:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.091 18:09:08 -- common/autotest_common.sh@10 -- # set +x 00:03:55.091 18:09:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.091 18:09:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:55.091 18:09:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.091 18:09:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:55.091 18:09:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:55.091 18:09:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:55.091 18:09:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.091 18:09:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:55.091 18:09:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:55.091 18:09:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:55.091 18:09:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.091 18:09:08 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.091 18:09:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:55.091 18:09:08 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:55.091 18:09:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5f:00.0 00:03:55.091 18:09:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.091 18:09:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:03:55.091 18:09:08 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:55.091 18:09:08 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:55.091 18:09:08 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:55.091 18:09:08 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:55.091 18:09:08 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5f:00.0 00:03:55.091 18:09:08 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5f:00.0 ]] 00:03:55.091 18:09:08 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3259982 00:03:55.091 18:09:08 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.091 18:09:08 -- common/autotest_common.sh@1583 -- # waitforlisten 3259982 00:03:55.091 18:09:08 -- common/autotest_common.sh@831 -- # '[' -z 3259982 ']' 00:03:55.091 18:09:08 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.091 18:09:08 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:55.091 18:09:08 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
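The opal_revert_cleanup pass above first narrows the controller list by PCI device id before the target is started; a standalone sketch of that filter, reusing the helper script and the 0x0a54 id checked in this run, looks roughly like:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # enumerate NVMe controller BDFs the same way get_nvme_bdfs does
    mapfile -t bdfs < <("$spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    # keep only controllers whose PCI device id matches 0x0a54
    for bdf in "${bdfs[@]}"; do
        if [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "0x0a54" ]]; then
            echo "$bdf selected for opal revert"
        fi
    done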
00:03:55.091 18:09:08 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:55.091 18:09:08 -- common/autotest_common.sh@10 -- # set +x 00:03:55.349 [2024-10-08 18:09:08.306722] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:03:55.349 [2024-10-08 18:09:08.306786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259982 ] 00:03:55.349 [2024-10-08 18:09:08.392059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.349 [2024-10-08 18:09:08.482094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.284 18:09:09 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:56.284 18:09:09 -- common/autotest_common.sh@864 -- # return 0 00:03:56.284 18:09:09 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:56.284 18:09:09 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:56.285 18:09:09 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:03:59.570 nvme0n1 00:03:59.570 18:09:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:59.570 [2024-10-08 18:09:12.329432] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:59.570 request: 00:03:59.570 { 00:03:59.570 "nvme_ctrlr_name": "nvme0", 00:03:59.570 "password": "test", 00:03:59.570 "method": "bdev_nvme_opal_revert", 00:03:59.570 "req_id": 1 00:03:59.570 } 00:03:59.570 Got JSON-RPC error response 00:03:59.570 response: 00:03:59.570 { 00:03:59.570 "code": -32602, 00:03:59.570 "message": "Invalid parameters" 00:03:59.570 } 00:03:59.570 18:09:12 -- common/autotest_common.sh@1589 -- # true 00:03:59.570 18:09:12 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:59.570 18:09:12 -- common/autotest_common.sh@1593 -- # killprocess 3259982 00:03:59.570 18:09:12 -- common/autotest_common.sh@950 -- # '[' -z 3259982 ']' 00:03:59.570 18:09:12 -- common/autotest_common.sh@954 -- # kill -0 3259982 00:03:59.570 18:09:12 -- common/autotest_common.sh@955 -- # uname 00:03:59.570 18:09:12 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:59.570 18:09:12 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3259982 00:03:59.570 18:09:12 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:59.570 18:09:12 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:59.570 18:09:12 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3259982' 00:03:59.570 killing process with pid 3259982 00:03:59.570 18:09:12 -- common/autotest_common.sh@969 -- # kill 3259982 00:03:59.570 18:09:12 -- common/autotest_common.sh@974 -- # wait 3259982 00:04:03.806 18:09:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.806 18:09:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.806 18:09:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.806 18:09:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.806 18:09:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.806 18:09:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.806 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.806 18:09:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.806 
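For reference, the two RPC calls traced above can be replayed by hand with the same rpc.py arguments; on this controller the revert is expected to come back with the "not support opal" error shown, and the harness simply treats that as a skip rather than a failure:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # attach the PCIe controller at 0000:5f:00.0 as bdev controller "nvme0"
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0
    # attempt the Opal revert; tolerate the expected "not support opal" error
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || echo 'no Opal support; revert skipped'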
18:09:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:03.806 18:09:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.806 18:09:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.806 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.806 ************************************ 00:04:03.806 START TEST env 00:04:03.806 ************************************ 00:04:03.806 18:09:16 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:03.806 * Looking for test storage... 00:04:03.806 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:03.806 18:09:16 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:03.806 18:09:16 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:03.806 18:09:16 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:03.806 18:09:16 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:03.806 18:09:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.806 18:09:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.806 18:09:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.806 18:09:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.806 18:09:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.806 18:09:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.806 18:09:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.806 18:09:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.806 18:09:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.806 18:09:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.806 18:09:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.806 18:09:16 env -- scripts/common.sh@344 -- # case "$op" in 00:04:03.806 18:09:16 env -- scripts/common.sh@345 -- # : 1 00:04:03.806 18:09:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.806 18:09:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.806 18:09:16 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.806 18:09:16 env -- scripts/common.sh@353 -- # local d=1 00:04:03.806 18:09:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.806 18:09:16 env -- scripts/common.sh@355 -- # echo 1 00:04:03.806 18:09:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.806 18:09:16 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.806 18:09:16 env -- scripts/common.sh@353 -- # local d=2 00:04:03.806 18:09:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.806 18:09:16 env -- scripts/common.sh@355 -- # echo 2 00:04:03.806 18:09:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.806 18:09:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.807 18:09:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.807 18:09:16 env -- scripts/common.sh@368 -- # return 0 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.807 --rc genhtml_branch_coverage=1 00:04:03.807 --rc genhtml_function_coverage=1 00:04:03.807 --rc genhtml_legend=1 00:04:03.807 --rc geninfo_all_blocks=1 00:04:03.807 --rc geninfo_unexecuted_blocks=1 00:04:03.807 00:04:03.807 ' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.807 --rc genhtml_branch_coverage=1 00:04:03.807 --rc genhtml_function_coverage=1 00:04:03.807 --rc genhtml_legend=1 00:04:03.807 --rc geninfo_all_blocks=1 00:04:03.807 --rc geninfo_unexecuted_blocks=1 00:04:03.807 00:04:03.807 ' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.807 --rc genhtml_branch_coverage=1 00:04:03.807 --rc genhtml_function_coverage=1 00:04:03.807 --rc genhtml_legend=1 00:04:03.807 --rc geninfo_all_blocks=1 00:04:03.807 --rc geninfo_unexecuted_blocks=1 00:04:03.807 00:04:03.807 ' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.807 --rc genhtml_branch_coverage=1 00:04:03.807 --rc genhtml_function_coverage=1 00:04:03.807 --rc genhtml_legend=1 00:04:03.807 --rc geninfo_all_blocks=1 00:04:03.807 --rc geninfo_unexecuted_blocks=1 00:04:03.807 00:04:03.807 ' 00:04:03.807 18:09:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.807 18:09:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.807 ************************************ 00:04:03.807 START TEST env_memory 00:04:03.807 ************************************ 00:04:03.807 18:09:16 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.807 00:04:03.807 00:04:03.807 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.807 http://cunit.sourceforge.net/ 00:04:03.807 00:04:03.807 00:04:03.807 Suite: memory 00:04:03.807 Test: alloc and free memory map ...[2024-10-08 18:09:16.679331] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.807 passed 00:04:03.807 Test: mem map translation ...[2024-10-08 18:09:16.698374] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.807 [2024-10-08 18:09:16.698389] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.807 [2024-10-08 18:09:16.698441] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.807 [2024-10-08 18:09:16.698450] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.807 passed 00:04:03.807 Test: mem map registration ...[2024-10-08 18:09:16.734796] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:03.807 [2024-10-08 18:09:16.734817] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:03.807 passed 00:04:03.807 Test: mem map adjacent registrations ...passed 00:04:03.807 00:04:03.807 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.807 suites 1 1 n/a 0 0 00:04:03.807 tests 4 4 4 0 0 00:04:03.807 asserts 152 152 152 0 n/a 00:04:03.807 00:04:03.807 Elapsed time = 0.131 seconds 00:04:03.807 00:04:03.807 real 0m0.141s 00:04:03.807 user 0m0.131s 00:04:03.807 sys 0m0.009s 00:04:03.807 18:09:16 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.807 18:09:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:03.807 ************************************ 00:04:03.807 END TEST env_memory 00:04:03.807 ************************************ 00:04:03.807 18:09:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.807 18:09:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.807 18:09:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.807 ************************************ 00:04:03.807 START TEST env_vtophys 00:04:03.807 ************************************ 00:04:03.807 18:09:16 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.807 EAL: lib.eal log level changed from notice to debug 00:04:03.807 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.807 EAL: Detected lcore 1 as core 1 on socket 0 00:04:03.807 EAL: Detected lcore 2 as core 2 on socket 0 00:04:03.807 EAL: Detected lcore 3 as core 3 on socket 0 00:04:03.807 EAL: Detected lcore 4 as core 4 on socket 0 00:04:03.807 EAL: Detected lcore 5 as core 8 on socket 0 00:04:03.807 EAL: Detected lcore 6 as core 9 on socket 0 00:04:03.807 EAL: Detected lcore 7 as core 10 on socket 0 00:04:03.807 EAL: Detected lcore 8 as core 11 on socket 0 00:04:03.807 EAL: Detected lcore 9 as core 16 on socket 0 00:04:03.807 EAL: Detected lcore 10 as core 17 on socket 0 00:04:03.807 
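The EAL lcore enumeration that begins here (and continues through the next stretch of output) is simply the host topology as DPDK sees it; the 72-lcore, 2-socket layout reported further down can be cross-checked outside the test with lscpu, for example:

    # one row per logical CPU with its physical core, socket and NUMA node
    lscpu -e=CPU,CORE,SOCKET,NODE
    # summary counts that should match the "Detected CPU lcores: 72" / "NUMA nodes: 2" lines below
    lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\)):'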
EAL: Detected lcore 11 as core 18 on socket 0 00:04:03.807 EAL: Detected lcore 12 as core 19 on socket 0 00:04:03.807 EAL: Detected lcore 13 as core 20 on socket 0 00:04:03.807 EAL: Detected lcore 14 as core 24 on socket 0 00:04:03.807 EAL: Detected lcore 15 as core 25 on socket 0 00:04:03.807 EAL: Detected lcore 16 as core 26 on socket 0 00:04:03.807 EAL: Detected lcore 17 as core 27 on socket 0 00:04:03.807 EAL: Detected lcore 18 as core 0 on socket 1 00:04:03.807 EAL: Detected lcore 19 as core 1 on socket 1 00:04:03.808 EAL: Detected lcore 20 as core 2 on socket 1 00:04:03.808 EAL: Detected lcore 21 as core 3 on socket 1 00:04:03.808 EAL: Detected lcore 22 as core 4 on socket 1 00:04:03.808 EAL: Detected lcore 23 as core 8 on socket 1 00:04:03.808 EAL: Detected lcore 24 as core 9 on socket 1 00:04:03.808 EAL: Detected lcore 25 as core 10 on socket 1 00:04:03.808 EAL: Detected lcore 26 as core 11 on socket 1 00:04:03.808 EAL: Detected lcore 27 as core 16 on socket 1 00:04:03.808 EAL: Detected lcore 28 as core 17 on socket 1 00:04:03.808 EAL: Detected lcore 29 as core 18 on socket 1 00:04:03.808 EAL: Detected lcore 30 as core 19 on socket 1 00:04:03.808 EAL: Detected lcore 31 as core 20 on socket 1 00:04:03.808 EAL: Detected lcore 32 as core 24 on socket 1 00:04:03.808 EAL: Detected lcore 33 as core 25 on socket 1 00:04:03.808 EAL: Detected lcore 34 as core 26 on socket 1 00:04:03.808 EAL: Detected lcore 35 as core 27 on socket 1 00:04:03.808 EAL: Detected lcore 36 as core 0 on socket 0 00:04:03.808 EAL: Detected lcore 37 as core 1 on socket 0 00:04:03.808 EAL: Detected lcore 38 as core 2 on socket 0 00:04:03.808 EAL: Detected lcore 39 as core 3 on socket 0 00:04:03.808 EAL: Detected lcore 40 as core 4 on socket 0 00:04:03.808 EAL: Detected lcore 41 as core 8 on socket 0 00:04:03.808 EAL: Detected lcore 42 as core 9 on socket 0 00:04:03.808 EAL: Detected lcore 43 as core 10 on socket 0 00:04:03.808 EAL: Detected lcore 44 as core 11 on socket 0 00:04:03.808 EAL: Detected lcore 45 as core 16 on socket 0 00:04:03.808 EAL: Detected lcore 46 as core 17 on socket 0 00:04:03.808 EAL: Detected lcore 47 as core 18 on socket 0 00:04:03.808 EAL: Detected lcore 48 as core 19 on socket 0 00:04:03.808 EAL: Detected lcore 49 as core 20 on socket 0 00:04:03.808 EAL: Detected lcore 50 as core 24 on socket 0 00:04:03.808 EAL: Detected lcore 51 as core 25 on socket 0 00:04:03.808 EAL: Detected lcore 52 as core 26 on socket 0 00:04:03.808 EAL: Detected lcore 53 as core 27 on socket 0 00:04:03.808 EAL: Detected lcore 54 as core 0 on socket 1 00:04:03.808 EAL: Detected lcore 55 as core 1 on socket 1 00:04:03.808 EAL: Detected lcore 56 as core 2 on socket 1 00:04:03.808 EAL: Detected lcore 57 as core 3 on socket 1 00:04:03.808 EAL: Detected lcore 58 as core 4 on socket 1 00:04:03.808 EAL: Detected lcore 59 as core 8 on socket 1 00:04:03.808 EAL: Detected lcore 60 as core 9 on socket 1 00:04:03.808 EAL: Detected lcore 61 as core 10 on socket 1 00:04:03.808 EAL: Detected lcore 62 as core 11 on socket 1 00:04:03.808 EAL: Detected lcore 63 as core 16 on socket 1 00:04:03.808 EAL: Detected lcore 64 as core 17 on socket 1 00:04:03.808 EAL: Detected lcore 65 as core 18 on socket 1 00:04:03.808 EAL: Detected lcore 66 as core 19 on socket 1 00:04:03.808 EAL: Detected lcore 67 as core 20 on socket 1 00:04:03.808 EAL: Detected lcore 68 as core 24 on socket 1 00:04:03.808 EAL: Detected lcore 69 as core 25 on socket 1 00:04:03.808 EAL: Detected lcore 70 as core 26 on socket 1 00:04:03.808 EAL: Detected lcore 71 as core 27 
on socket 1 00:04:03.808 EAL: Maximum logical cores by configuration: 128 00:04:03.808 EAL: Detected CPU lcores: 72 00:04:03.808 EAL: Detected NUMA nodes: 2 00:04:03.808 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.808 EAL: Detected shared linkage of DPDK 00:04:03.808 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.808 EAL: Bus pci wants IOVA as 'DC' 00:04:03.808 EAL: Buses did not request a specific IOVA mode. 00:04:03.808 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:03.808 EAL: Selected IOVA mode 'VA' 00:04:03.808 EAL: Probing VFIO support... 00:04:03.808 EAL: IOMMU type 1 (Type 1) is supported 00:04:03.808 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:03.808 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:03.808 EAL: VFIO support initialized 00:04:03.808 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.808 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.808 EAL: Setting up physically contiguous memory... 00:04:03.808 EAL: Setting maximum number of open files to 524288 00:04:03.808 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.808 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:03.808 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.808 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.808 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.808 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.808 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.808 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.808 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.808 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.808 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.808 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.808 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.808 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.808 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.808 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x201400a00000 (size = 
0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:03.809 EAL: Hugepages will be freed exactly as allocated. 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: TSC frequency is ~2300000 KHz 00:04:03.809 EAL: Main lcore 0 is ready (tid=7f15d5ca8a00;cpuset=[0]) 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 0 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.809 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.809 00:04:03.809 00:04:03.809 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.809 http://cunit.sourceforge.net/ 00:04:03.809 00:04:03.809 00:04:03.809 Suite: components_suite 00:04:03.809 Test: vtophys_malloc_test ...passed 00:04:03.809 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.809 EAL: Trying to obtain current memory policy. 
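All the memseg lists set up above use a 0x800kB (2 MiB) page, and the heap expand/shrink messages that follow grow and release memory in multiples of that size; the host's default hugepage size and the pool backing them can be confirmed straight from /proc/meminfo:

    # default hugepage size plus the total/free counts backing the EAL heap
    grep -E 'Hugepagesize|HugePages_Total|HugePages_Free' /proc/meminfo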
00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.070 EAL: request: mp_malloc_sync 00:04:04.070 EAL: No shared files mode enabled, IPC is disabled 00:04:04.070 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.070 EAL: request: mp_malloc_sync 00:04:04.070 EAL: No shared files mode enabled, IPC is disabled 00:04:04.070 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.070 EAL: Trying to obtain current memory policy. 00:04:04.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.070 EAL: Restoring previous memory policy: 4 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.070 EAL: request: mp_malloc_sync 00:04:04.070 EAL: No shared files mode enabled, IPC is disabled 00:04:04.070 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.070 EAL: request: mp_malloc_sync 00:04:04.070 EAL: No shared files mode enabled, IPC is disabled 00:04:04.070 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.070 EAL: Trying to obtain current memory policy. 
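A pattern worth noting in this pass: the allocation sizes being stepped through (4, 6, 10, 18, ... up to 1026 MB just below) follow 2^k + 2 MiB, so assuming each expansion really is backed by whole 2 MiB hugepages, the per-step page cost is easy to tabulate:

    # reproduce the allocation ladder observed in this vtophys run (the hugepage
    # backing is an assumption here, not something the log states explicitly)
    for k in $(seq 1 10); do
        mb=$(( (1 << k) + 2 ))
        echo "step $k: heap expanded by ${mb} MB -> $(( mb / 2 )) x 2 MiB hugepages"
    done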
00:04:04.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.070 EAL: Restoring previous memory policy: 4 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.070 EAL: request: mp_malloc_sync 00:04:04.070 EAL: No shared files mode enabled, IPC is disabled 00:04:04.070 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.329 EAL: request: mp_malloc_sync 00:04:04.329 EAL: No shared files mode enabled, IPC is disabled 00:04:04.329 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.329 EAL: Trying to obtain current memory policy. 00:04:04.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.329 EAL: Restoring previous memory policy: 4 00:04:04.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.329 EAL: request: mp_malloc_sync 00:04:04.329 EAL: No shared files mode enabled, IPC is disabled 00:04:04.329 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.588 EAL: request: mp_malloc_sync 00:04:04.588 EAL: No shared files mode enabled, IPC is disabled 00:04:04.588 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.588 EAL: Trying to obtain current memory policy. 00:04:04.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.848 EAL: Restoring previous memory policy: 4 00:04:04.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.848 EAL: request: mp_malloc_sync 00:04:04.848 EAL: No shared files mode enabled, IPC is disabled 00:04:04.848 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.107 EAL: request: mp_malloc_sync 00:04:05.107 EAL: No shared files mode enabled, IPC is disabled 00:04:05.107 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.107 passed 00:04:05.107 00:04:05.107 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.107 suites 1 1 n/a 0 0 00:04:05.107 tests 2 2 2 0 0 00:04:05.107 asserts 497 497 497 0 n/a 00:04:05.107 00:04:05.107 Elapsed time = 1.148 seconds 00:04:05.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.107 EAL: request: mp_malloc_sync 00:04:05.107 EAL: No shared files mode enabled, IPC is disabled 00:04:05.107 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.107 EAL: No shared files mode enabled, IPC is disabled 00:04:05.107 EAL: No shared files mode enabled, IPC is disabled 00:04:05.107 EAL: No shared files mode enabled, IPC is disabled 00:04:05.107 00:04:05.107 real 0m1.287s 00:04:05.107 user 0m0.749s 00:04:05.107 sys 0m0.511s 00:04:05.107 18:09:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.107 18:09:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.107 ************************************ 00:04:05.107 END TEST env_vtophys 00:04:05.107 ************************************ 00:04:05.107 18:09:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.107 18:09:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.107 18:09:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.107 18:09:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.107 ************************************ 00:04:05.107 START TEST env_pci 00:04:05.107 ************************************ 00:04:05.107 18:09:18 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.107 00:04:05.107 00:04:05.107 CUnit - A unit testing framework for 
C - Version 2.1-3 00:04:05.107 http://cunit.sourceforge.net/ 00:04:05.107 00:04:05.107 00:04:05.107 Suite: pci 00:04:05.107 Test: pci_hook ...[2024-10-08 18:09:18.252883] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3261372 has claimed it 00:04:05.368 EAL: Cannot find device (10000:00:01.0) 00:04:05.368 EAL: Failed to attach device on primary process 00:04:05.368 passed 00:04:05.368 00:04:05.368 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.368 suites 1 1 n/a 0 0 00:04:05.368 tests 1 1 1 0 0 00:04:05.368 asserts 25 25 25 0 n/a 00:04:05.368 00:04:05.368 Elapsed time = 0.034 seconds 00:04:05.368 00:04:05.368 real 0m0.057s 00:04:05.368 user 0m0.017s 00:04:05.368 sys 0m0.040s 00:04:05.368 18:09:18 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.368 18:09:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.368 ************************************ 00:04:05.368 END TEST env_pci 00:04:05.368 ************************************ 00:04:05.368 18:09:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.368 18:09:18 env -- env/env.sh@15 -- # uname 00:04:05.368 18:09:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.368 18:09:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.368 18:09:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.368 18:09:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:05.368 18:09:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.368 18:09:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.368 ************************************ 00:04:05.368 START TEST env_dpdk_post_init 00:04:05.368 ************************************ 00:04:05.368 18:09:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.368 EAL: Detected CPU lcores: 72 00:04:05.368 EAL: Detected NUMA nodes: 2 00:04:05.368 EAL: Detected shared linkage of DPDK 00:04:05.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.368 EAL: Selected IOVA mode 'VA' 00:04:05.368 EAL: VFIO support initialized 00:04:05.368 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.368 EAL: Using IOMMU type 1 (Type 1) 00:04:05.368 EAL: Ignore mapping IO port bar(1) 00:04:05.368 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:05.626 EAL: Ignore mapping IO port bar(1) 00:04:05.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:05.626 EAL: Ignore mapping IO port bar(1) 00:04:05.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:05.626 EAL: Ignore mapping IO port bar(1) 00:04:05.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:05.626 EAL: Ignore mapping IO port bar(1) 00:04:05.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:05.626 EAL: Ignore mapping IO port bar(1) 00:04:05.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:05.627 EAL: Ignore mapping IO port bar(1) 00:04:05.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.6 (socket 0) 00:04:05.627 EAL: Ignore mapping IO port bar(1) 00:04:05.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:06.195 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:06.455 EAL: Ignore mapping IO port bar(1) 00:04:06.455 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:11.723 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:11.723 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:11.982 Starting DPDK initialization... 00:04:11.982 Starting SPDK post initialization... 00:04:11.982 SPDK NVMe probe 00:04:11.982 Attaching to 0000:5f:00.0 00:04:11.982 Attached to 0000:5f:00.0 00:04:11.982 Cleaning up... 00:04:11.982 00:04:11.982 real 0m6.676s 00:04:11.982 user 0m4.872s 00:04:11.982 sys 0m0.865s 00:04:11.982 18:09:25 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.982 18:09:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.982 ************************************ 00:04:11.982 END TEST env_dpdk_post_init 00:04:11.982 ************************************ 00:04:11.982 18:09:25 env -- env/env.sh@26 -- # uname 00:04:11.982 18:09:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:11.982 18:09:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.982 18:09:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.982 18:09:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.982 18:09:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.242 ************************************ 00:04:12.242 START TEST env_mem_callbacks 00:04:12.242 ************************************ 00:04:12.242 18:09:25 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.242 EAL: Detected CPU lcores: 72 00:04:12.242 EAL: Detected NUMA nodes: 2 00:04:12.242 EAL: Detected shared linkage of DPDK 00:04:12.242 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.242 EAL: Selected IOVA mode 'VA' 00:04:12.242 EAL: VFIO support initialized 00:04:12.242 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.242 00:04:12.242 00:04:12.242 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.242 http://cunit.sourceforge.net/ 00:04:12.242 00:04:12.242 00:04:12.242 Suite: memory 00:04:12.242 Test: test ... 
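The env_dpdk_post_init probe above only attaches cleanly because setup.sh had already rebound every device it touches to vfio-pci; a quick sanity check of those bindings, before or after a run, is to read each device's driver symlink from sysfs (the BDFs below are the NVMe controller and one I/OAT channel per socket from this node):

    # report the kernel driver currently bound to each BDF
    for bdf in 0000:5f:00.0 0000:00:04.0 0000:80:04.0; do
        echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done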
00:04:12.242 register 0x200000200000 2097152 00:04:12.242 malloc 3145728 00:04:12.242 register 0x200000400000 4194304 00:04:12.242 buf 0x200000500000 len 3145728 PASSED 00:04:12.242 malloc 64 00:04:12.242 buf 0x2000004fff40 len 64 PASSED 00:04:12.242 malloc 4194304 00:04:12.242 register 0x200000800000 6291456 00:04:12.242 buf 0x200000a00000 len 4194304 PASSED 00:04:12.242 free 0x200000500000 3145728 00:04:12.242 free 0x2000004fff40 64 00:04:12.242 unregister 0x200000400000 4194304 PASSED 00:04:12.242 free 0x200000a00000 4194304 00:04:12.242 unregister 0x200000800000 6291456 PASSED 00:04:12.242 malloc 8388608 00:04:12.242 register 0x200000400000 10485760 00:04:12.242 buf 0x200000600000 len 8388608 PASSED 00:04:12.242 free 0x200000600000 8388608 00:04:12.242 unregister 0x200000400000 10485760 PASSED 00:04:12.242 passed 00:04:12.242 00:04:12.242 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.242 suites 1 1 n/a 0 0 00:04:12.242 tests 1 1 1 0 0 00:04:12.242 asserts 15 15 15 0 n/a 00:04:12.242 00:04:12.242 Elapsed time = 0.010 seconds 00:04:12.242 00:04:12.242 real 0m0.074s 00:04:12.242 user 0m0.023s 00:04:12.242 sys 0m0.049s 00:04:12.242 18:09:25 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.242 18:09:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.242 ************************************ 00:04:12.242 END TEST env_mem_callbacks 00:04:12.242 ************************************ 00:04:12.242 00:04:12.242 real 0m8.863s 00:04:12.242 user 0m6.057s 00:04:12.242 sys 0m1.887s 00:04:12.242 18:09:25 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.242 18:09:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.242 ************************************ 00:04:12.242 END TEST env 00:04:12.242 ************************************ 00:04:12.242 18:09:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.242 18:09:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.242 18:09:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.242 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:04:12.242 ************************************ 00:04:12.242 START TEST rpc 00:04:12.242 ************************************ 00:04:12.242 18:09:25 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.502 * Looking for test storage... 
00:04:12.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.502 18:09:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.502 18:09:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.502 18:09:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.502 18:09:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.502 18:09:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.502 18:09:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.502 18:09:25 rpc -- scripts/common.sh@345 -- # : 1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.502 18:09:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.502 18:09:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.502 18:09:25 rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.502 18:09:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.502 18:09:25 rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.502 18:09:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.502 18:09:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.502 18:09:25 rpc -- scripts/common.sh@368 -- # return 0 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.502 --rc genhtml_branch_coverage=1 00:04:12.502 --rc genhtml_function_coverage=1 00:04:12.502 --rc genhtml_legend=1 00:04:12.502 --rc geninfo_all_blocks=1 00:04:12.502 --rc geninfo_unexecuted_blocks=1 00:04:12.502 00:04:12.502 ' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.502 --rc genhtml_branch_coverage=1 00:04:12.502 --rc genhtml_function_coverage=1 00:04:12.502 --rc genhtml_legend=1 00:04:12.502 --rc geninfo_all_blocks=1 00:04:12.502 --rc geninfo_unexecuted_blocks=1 00:04:12.502 00:04:12.502 ' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.502 --rc genhtml_branch_coverage=1 00:04:12.502 --rc genhtml_function_coverage=1 00:04:12.502 
--rc genhtml_legend=1 00:04:12.502 --rc geninfo_all_blocks=1 00:04:12.502 --rc geninfo_unexecuted_blocks=1 00:04:12.502 00:04:12.502 ' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.502 --rc genhtml_branch_coverage=1 00:04:12.502 --rc genhtml_function_coverage=1 00:04:12.502 --rc genhtml_legend=1 00:04:12.502 --rc geninfo_all_blocks=1 00:04:12.502 --rc geninfo_unexecuted_blocks=1 00:04:12.502 00:04:12.502 ' 00:04:12.502 18:09:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3262417 00:04:12.502 18:09:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.502 18:09:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:12.502 18:09:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3262417 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@831 -- # '[' -z 3262417 ']' 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.502 18:09:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.502 [2024-10-08 18:09:25.613230] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:04:12.502 [2024-10-08 18:09:25.613292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262417 ] 00:04:12.761 [2024-10-08 18:09:25.698397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.761 [2024-10-08 18:09:25.785622] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:12.762 [2024-10-08 18:09:25.785662] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3262417' to capture a snapshot of events at runtime. 00:04:12.762 [2024-10-08 18:09:25.785671] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:12.762 [2024-10-08 18:09:25.785694] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:12.762 [2024-10-08 18:09:25.785702] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3262417 for offline analysis/debug. 
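The target for the rpc suite is launched with `-e bdev`, so the bdev tracepoint group is enabled (the group mask 0x8 that trace_get_info reports further down), and the startup banner prints the exact spdk_trace command and shared-memory file that can be used to inspect it. The rpc_integrity run that follows drives everything through rpc_cmd; a minimal sketch of the same bdev round-trip against a live target, assuming rpc_cmd is equivalent to scripts/rpc.py talking to /var/tmp/spdk.sock and that spdk_trace was built into build/bin:

```bash
# Hedged sketch of the rpc_integrity round-trip shown in the log below.
./scripts/rpc.py bdev_malloc_create 8 512                  # 8 MiB malloc bdev, 512 B blocks -> e.g. Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                # expect 2 (Malloc0 + Passthru0)
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                # expect 0 again
# Optional: capture the bdev tracepoints enabled by '-e bdev', using the
# command the startup banner suggests (binary location is an assumption).
./build/bin/spdk_trace -s spdk_tgt -p "$(pgrep -f spdk_tgt)"
```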
00:04:12.762 [2024-10-08 18:09:25.786150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.329 18:09:26 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:13.329 18:09:26 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:13.329 18:09:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:13.329 18:09:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:13.329 18:09:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.329 18:09:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.329 18:09:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.329 18:09:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.329 18:09:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.329 ************************************ 00:04:13.329 START TEST rpc_integrity 00:04:13.329 ************************************ 00:04:13.329 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:13.329 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.329 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.329 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.329 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.329 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.329 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.588 { 00:04:13.588 "name": "Malloc0", 00:04:13.588 "aliases": [ 00:04:13.588 "bafc465d-8069-4e94-9c06-11776062e61c" 00:04:13.588 ], 00:04:13.588 "product_name": "Malloc disk", 00:04:13.588 "block_size": 512, 00:04:13.588 "num_blocks": 16384, 00:04:13.588 "uuid": "bafc465d-8069-4e94-9c06-11776062e61c", 00:04:13.588 "assigned_rate_limits": { 00:04:13.588 "rw_ios_per_sec": 0, 00:04:13.588 "rw_mbytes_per_sec": 0, 00:04:13.588 "r_mbytes_per_sec": 0, 00:04:13.588 "w_mbytes_per_sec": 0 00:04:13.588 }, 00:04:13.588 "claimed": false, 
00:04:13.588 "zoned": false, 00:04:13.588 "supported_io_types": { 00:04:13.588 "read": true, 00:04:13.588 "write": true, 00:04:13.588 "unmap": true, 00:04:13.588 "flush": true, 00:04:13.588 "reset": true, 00:04:13.588 "nvme_admin": false, 00:04:13.588 "nvme_io": false, 00:04:13.588 "nvme_io_md": false, 00:04:13.588 "write_zeroes": true, 00:04:13.588 "zcopy": true, 00:04:13.588 "get_zone_info": false, 00:04:13.588 "zone_management": false, 00:04:13.588 "zone_append": false, 00:04:13.588 "compare": false, 00:04:13.588 "compare_and_write": false, 00:04:13.588 "abort": true, 00:04:13.588 "seek_hole": false, 00:04:13.588 "seek_data": false, 00:04:13.588 "copy": true, 00:04:13.588 "nvme_iov_md": false 00:04:13.588 }, 00:04:13.588 "memory_domains": [ 00:04:13.588 { 00:04:13.588 "dma_device_id": "system", 00:04:13.588 "dma_device_type": 1 00:04:13.588 }, 00:04:13.588 { 00:04:13.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.588 "dma_device_type": 2 00:04:13.588 } 00:04:13.588 ], 00:04:13.588 "driver_specific": {} 00:04:13.588 } 00:04:13.588 ]' 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 [2024-10-08 18:09:26.607290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.588 [2024-10-08 18:09:26.607320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.588 [2024-10-08 18:09:26.607334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbe1ad0 00:04:13.588 [2024-10-08 18:09:26.607346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.588 [2024-10-08 18:09:26.608483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.588 [2024-10-08 18:09:26.608505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.588 Passthru0 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.588 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.588 { 00:04:13.588 "name": "Malloc0", 00:04:13.588 "aliases": [ 00:04:13.588 "bafc465d-8069-4e94-9c06-11776062e61c" 00:04:13.588 ], 00:04:13.588 "product_name": "Malloc disk", 00:04:13.588 "block_size": 512, 00:04:13.588 "num_blocks": 16384, 00:04:13.588 "uuid": "bafc465d-8069-4e94-9c06-11776062e61c", 00:04:13.588 "assigned_rate_limits": { 00:04:13.588 "rw_ios_per_sec": 0, 00:04:13.588 "rw_mbytes_per_sec": 0, 00:04:13.588 "r_mbytes_per_sec": 0, 00:04:13.588 "w_mbytes_per_sec": 0 00:04:13.588 }, 00:04:13.588 "claimed": true, 00:04:13.588 "claim_type": "exclusive_write", 00:04:13.588 "zoned": false, 00:04:13.588 "supported_io_types": { 00:04:13.588 "read": true, 00:04:13.588 "write": true, 00:04:13.588 "unmap": true, 00:04:13.588 "flush": true, 00:04:13.588 "reset": true, 
00:04:13.588 "nvme_admin": false, 00:04:13.588 "nvme_io": false, 00:04:13.588 "nvme_io_md": false, 00:04:13.588 "write_zeroes": true, 00:04:13.588 "zcopy": true, 00:04:13.588 "get_zone_info": false, 00:04:13.588 "zone_management": false, 00:04:13.588 "zone_append": false, 00:04:13.588 "compare": false, 00:04:13.588 "compare_and_write": false, 00:04:13.588 "abort": true, 00:04:13.588 "seek_hole": false, 00:04:13.588 "seek_data": false, 00:04:13.588 "copy": true, 00:04:13.588 "nvme_iov_md": false 00:04:13.588 }, 00:04:13.588 "memory_domains": [ 00:04:13.588 { 00:04:13.588 "dma_device_id": "system", 00:04:13.588 "dma_device_type": 1 00:04:13.588 }, 00:04:13.588 { 00:04:13.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.588 "dma_device_type": 2 00:04:13.588 } 00:04:13.588 ], 00:04:13.588 "driver_specific": {} 00:04:13.588 }, 00:04:13.588 { 00:04:13.588 "name": "Passthru0", 00:04:13.588 "aliases": [ 00:04:13.588 "ca035da3-d031-5f94-96ce-daadce313fe1" 00:04:13.588 ], 00:04:13.588 "product_name": "passthru", 00:04:13.588 "block_size": 512, 00:04:13.588 "num_blocks": 16384, 00:04:13.588 "uuid": "ca035da3-d031-5f94-96ce-daadce313fe1", 00:04:13.588 "assigned_rate_limits": { 00:04:13.588 "rw_ios_per_sec": 0, 00:04:13.588 "rw_mbytes_per_sec": 0, 00:04:13.588 "r_mbytes_per_sec": 0, 00:04:13.588 "w_mbytes_per_sec": 0 00:04:13.588 }, 00:04:13.588 "claimed": false, 00:04:13.588 "zoned": false, 00:04:13.588 "supported_io_types": { 00:04:13.588 "read": true, 00:04:13.588 "write": true, 00:04:13.588 "unmap": true, 00:04:13.588 "flush": true, 00:04:13.588 "reset": true, 00:04:13.588 "nvme_admin": false, 00:04:13.588 "nvme_io": false, 00:04:13.588 "nvme_io_md": false, 00:04:13.588 "write_zeroes": true, 00:04:13.588 "zcopy": true, 00:04:13.588 "get_zone_info": false, 00:04:13.588 "zone_management": false, 00:04:13.588 "zone_append": false, 00:04:13.588 "compare": false, 00:04:13.588 "compare_and_write": false, 00:04:13.588 "abort": true, 00:04:13.588 "seek_hole": false, 00:04:13.588 "seek_data": false, 00:04:13.588 "copy": true, 00:04:13.588 "nvme_iov_md": false 00:04:13.588 }, 00:04:13.588 "memory_domains": [ 00:04:13.588 { 00:04:13.588 "dma_device_id": "system", 00:04:13.588 "dma_device_type": 1 00:04:13.588 }, 00:04:13.588 { 00:04:13.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.588 "dma_device_type": 2 00:04:13.589 } 00:04:13.589 ], 00:04:13.589 "driver_specific": { 00:04:13.589 "passthru": { 00:04:13.589 "name": "Passthru0", 00:04:13.589 "base_bdev_name": "Malloc0" 00:04:13.589 } 00:04:13.589 } 00:04:13.589 } 00:04:13.589 ]' 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.589 
18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.589 18:09:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.589 00:04:13.589 real 0m0.278s 00:04:13.589 user 0m0.163s 00:04:13.589 sys 0m0.053s 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.589 18:09:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.589 ************************************ 00:04:13.589 END TEST rpc_integrity 00:04:13.589 ************************************ 00:04:13.848 18:09:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.848 18:09:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.848 18:09:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.848 18:09:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 ************************************ 00:04:13.848 START TEST rpc_plugins 00:04:13.848 ************************************ 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.848 { 00:04:13.848 "name": "Malloc1", 00:04:13.848 "aliases": [ 00:04:13.848 "95d2dfeb-eed2-427a-90cd-b86af905b97c" 00:04:13.848 ], 00:04:13.848 "product_name": "Malloc disk", 00:04:13.848 "block_size": 4096, 00:04:13.848 "num_blocks": 256, 00:04:13.848 "uuid": "95d2dfeb-eed2-427a-90cd-b86af905b97c", 00:04:13.848 "assigned_rate_limits": { 00:04:13.848 "rw_ios_per_sec": 0, 00:04:13.848 "rw_mbytes_per_sec": 0, 00:04:13.848 "r_mbytes_per_sec": 0, 00:04:13.848 "w_mbytes_per_sec": 0 00:04:13.848 }, 00:04:13.848 "claimed": false, 00:04:13.848 "zoned": false, 00:04:13.848 "supported_io_types": { 00:04:13.848 "read": true, 00:04:13.848 "write": true, 00:04:13.848 "unmap": true, 00:04:13.848 "flush": true, 00:04:13.848 "reset": true, 00:04:13.848 "nvme_admin": false, 00:04:13.848 "nvme_io": false, 00:04:13.848 "nvme_io_md": false, 00:04:13.848 "write_zeroes": true, 00:04:13.848 "zcopy": true, 00:04:13.848 "get_zone_info": false, 00:04:13.848 "zone_management": false, 00:04:13.848 "zone_append": false, 00:04:13.848 "compare": false, 00:04:13.848 "compare_and_write": false, 00:04:13.848 "abort": true, 00:04:13.848 "seek_hole": false, 00:04:13.848 "seek_data": false, 00:04:13.848 "copy": true, 00:04:13.848 "nvme_iov_md": false 00:04:13.848 }, 00:04:13.848 
"memory_domains": [ 00:04:13.848 { 00:04:13.848 "dma_device_id": "system", 00:04:13.848 "dma_device_type": 1 00:04:13.848 }, 00:04:13.848 { 00:04:13.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.848 "dma_device_type": 2 00:04:13.848 } 00:04:13.848 ], 00:04:13.848 "driver_specific": {} 00:04:13.848 } 00:04:13.848 ]' 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.848 18:09:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.848 00:04:13.848 real 0m0.137s 00:04:13.848 user 0m0.080s 00:04:13.848 sys 0m0.025s 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.848 18:09:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 ************************************ 00:04:13.848 END TEST rpc_plugins 00:04:13.848 ************************************ 00:04:13.848 18:09:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.848 18:09:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.848 18:09:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.848 18:09:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.107 ************************************ 00:04:14.107 START TEST rpc_trace_cmd_test 00:04:14.107 ************************************ 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.107 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3262417", 00:04:14.107 "tpoint_group_mask": "0x8", 00:04:14.107 "iscsi_conn": { 00:04:14.107 "mask": "0x2", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "scsi": { 00:04:14.107 "mask": "0x4", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "bdev": { 00:04:14.107 "mask": "0x8", 00:04:14.107 "tpoint_mask": "0xffffffffffffffff" 00:04:14.107 }, 00:04:14.107 "nvmf_rdma": { 00:04:14.107 "mask": "0x10", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "nvmf_tcp": { 00:04:14.107 "mask": "0x20", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 
00:04:14.107 "ftl": { 00:04:14.107 "mask": "0x40", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "blobfs": { 00:04:14.107 "mask": "0x80", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "dsa": { 00:04:14.107 "mask": "0x200", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "thread": { 00:04:14.107 "mask": "0x400", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "nvme_pcie": { 00:04:14.107 "mask": "0x800", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "iaa": { 00:04:14.107 "mask": "0x1000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "nvme_tcp": { 00:04:14.107 "mask": "0x2000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "bdev_nvme": { 00:04:14.107 "mask": "0x4000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "sock": { 00:04:14.107 "mask": "0x8000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "blob": { 00:04:14.107 "mask": "0x10000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "bdev_raid": { 00:04:14.107 "mask": "0x20000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 }, 00:04:14.107 "scheduler": { 00:04:14.107 "mask": "0x40000", 00:04:14.107 "tpoint_mask": "0x0" 00:04:14.107 } 00:04:14.107 }' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.107 00:04:14.107 real 0m0.207s 00:04:14.107 user 0m0.160s 00:04:14.107 sys 0m0.035s 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.107 18:09:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.107 ************************************ 00:04:14.107 END TEST rpc_trace_cmd_test 00:04:14.107 ************************************ 00:04:14.367 18:09:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.367 18:09:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.367 18:09:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.367 18:09:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.367 18:09:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.367 18:09:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.367 ************************************ 00:04:14.367 START TEST rpc_daemon_integrity 00:04:14.367 ************************************ 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.367 { 00:04:14.367 "name": "Malloc2", 00:04:14.367 "aliases": [ 00:04:14.367 "bfb214e5-605c-4e16-bafa-3f236df40782" 00:04:14.367 ], 00:04:14.367 "product_name": "Malloc disk", 00:04:14.367 "block_size": 512, 00:04:14.367 "num_blocks": 16384, 00:04:14.367 "uuid": "bfb214e5-605c-4e16-bafa-3f236df40782", 00:04:14.367 "assigned_rate_limits": { 00:04:14.367 "rw_ios_per_sec": 0, 00:04:14.367 "rw_mbytes_per_sec": 0, 00:04:14.367 "r_mbytes_per_sec": 0, 00:04:14.367 "w_mbytes_per_sec": 0 00:04:14.367 }, 00:04:14.367 "claimed": false, 00:04:14.367 "zoned": false, 00:04:14.367 "supported_io_types": { 00:04:14.367 "read": true, 00:04:14.367 "write": true, 00:04:14.367 "unmap": true, 00:04:14.367 "flush": true, 00:04:14.367 "reset": true, 00:04:14.367 "nvme_admin": false, 00:04:14.367 "nvme_io": false, 00:04:14.367 "nvme_io_md": false, 00:04:14.367 "write_zeroes": true, 00:04:14.367 "zcopy": true, 00:04:14.367 "get_zone_info": false, 00:04:14.367 "zone_management": false, 00:04:14.367 "zone_append": false, 00:04:14.367 "compare": false, 00:04:14.367 "compare_and_write": false, 00:04:14.367 "abort": true, 00:04:14.367 "seek_hole": false, 00:04:14.367 "seek_data": false, 00:04:14.367 "copy": true, 00:04:14.367 "nvme_iov_md": false 00:04:14.367 }, 00:04:14.367 "memory_domains": [ 00:04:14.367 { 00:04:14.367 "dma_device_id": "system", 00:04:14.367 "dma_device_type": 1 00:04:14.367 }, 00:04:14.367 { 00:04:14.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.367 "dma_device_type": 2 00:04:14.367 } 00:04:14.367 ], 00:04:14.367 "driver_specific": {} 00:04:14.367 } 00:04:14.367 ]' 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.367 [2024-10-08 18:09:27.485678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.367 [2024-10-08 18:09:27.485708] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.367 [2024-10-08 18:09:27.485723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbe53d0 00:04:14.367 [2024-10-08 18:09:27.485731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.367 [2024-10-08 18:09:27.486750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.367 [2024-10-08 18:09:27.486773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.367 Passthru0 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.367 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.367 { 00:04:14.367 "name": "Malloc2", 00:04:14.367 "aliases": [ 00:04:14.367 "bfb214e5-605c-4e16-bafa-3f236df40782" 00:04:14.367 ], 00:04:14.367 "product_name": "Malloc disk", 00:04:14.367 "block_size": 512, 00:04:14.367 "num_blocks": 16384, 00:04:14.367 "uuid": "bfb214e5-605c-4e16-bafa-3f236df40782", 00:04:14.367 "assigned_rate_limits": { 00:04:14.367 "rw_ios_per_sec": 0, 00:04:14.367 "rw_mbytes_per_sec": 0, 00:04:14.367 "r_mbytes_per_sec": 0, 00:04:14.367 "w_mbytes_per_sec": 0 00:04:14.367 }, 00:04:14.367 "claimed": true, 00:04:14.367 "claim_type": "exclusive_write", 00:04:14.367 "zoned": false, 00:04:14.367 "supported_io_types": { 00:04:14.367 "read": true, 00:04:14.367 "write": true, 00:04:14.367 "unmap": true, 00:04:14.367 "flush": true, 00:04:14.367 "reset": true, 00:04:14.367 "nvme_admin": false, 00:04:14.367 "nvme_io": false, 00:04:14.367 "nvme_io_md": false, 00:04:14.367 "write_zeroes": true, 00:04:14.367 "zcopy": true, 00:04:14.367 "get_zone_info": false, 00:04:14.367 "zone_management": false, 00:04:14.367 "zone_append": false, 00:04:14.367 "compare": false, 00:04:14.367 "compare_and_write": false, 00:04:14.367 "abort": true, 00:04:14.367 "seek_hole": false, 00:04:14.367 "seek_data": false, 00:04:14.367 "copy": true, 00:04:14.367 "nvme_iov_md": false 00:04:14.367 }, 00:04:14.367 "memory_domains": [ 00:04:14.367 { 00:04:14.367 "dma_device_id": "system", 00:04:14.367 "dma_device_type": 1 00:04:14.367 }, 00:04:14.367 { 00:04:14.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.367 "dma_device_type": 2 00:04:14.367 } 00:04:14.367 ], 00:04:14.367 "driver_specific": {} 00:04:14.367 }, 00:04:14.367 { 00:04:14.367 "name": "Passthru0", 00:04:14.367 "aliases": [ 00:04:14.367 "3bab9af8-6924-5056-9165-575076f833af" 00:04:14.367 ], 00:04:14.367 "product_name": "passthru", 00:04:14.367 "block_size": 512, 00:04:14.367 "num_blocks": 16384, 00:04:14.367 "uuid": "3bab9af8-6924-5056-9165-575076f833af", 00:04:14.367 "assigned_rate_limits": { 00:04:14.367 "rw_ios_per_sec": 0, 00:04:14.367 "rw_mbytes_per_sec": 0, 00:04:14.367 "r_mbytes_per_sec": 0, 00:04:14.367 "w_mbytes_per_sec": 0 00:04:14.367 }, 00:04:14.367 "claimed": false, 00:04:14.367 "zoned": false, 00:04:14.367 "supported_io_types": { 00:04:14.367 "read": true, 00:04:14.367 "write": true, 00:04:14.367 "unmap": true, 00:04:14.367 "flush": true, 00:04:14.367 "reset": true, 00:04:14.367 "nvme_admin": false, 
00:04:14.367 "nvme_io": false, 00:04:14.367 "nvme_io_md": false, 00:04:14.367 "write_zeroes": true, 00:04:14.367 "zcopy": true, 00:04:14.367 "get_zone_info": false, 00:04:14.367 "zone_management": false, 00:04:14.367 "zone_append": false, 00:04:14.367 "compare": false, 00:04:14.367 "compare_and_write": false, 00:04:14.367 "abort": true, 00:04:14.367 "seek_hole": false, 00:04:14.367 "seek_data": false, 00:04:14.367 "copy": true, 00:04:14.367 "nvme_iov_md": false 00:04:14.367 }, 00:04:14.367 "memory_domains": [ 00:04:14.367 { 00:04:14.367 "dma_device_id": "system", 00:04:14.367 "dma_device_type": 1 00:04:14.367 }, 00:04:14.367 { 00:04:14.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.367 "dma_device_type": 2 00:04:14.367 } 00:04:14.368 ], 00:04:14.368 "driver_specific": { 00:04:14.368 "passthru": { 00:04:14.368 "name": "Passthru0", 00:04:14.368 "base_bdev_name": "Malloc2" 00:04:14.368 } 00:04:14.368 } 00:04:14.368 } 00:04:14.368 ]' 00:04:14.368 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.627 00:04:14.627 real 0m0.296s 00:04:14.627 user 0m0.170s 00:04:14.627 sys 0m0.062s 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.627 18:09:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.627 ************************************ 00:04:14.627 END TEST rpc_daemon_integrity 00:04:14.627 ************************************ 00:04:14.627 18:09:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.627 18:09:27 rpc -- rpc/rpc.sh@84 -- # killprocess 3262417 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@950 -- # '[' -z 3262417 ']' 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@954 -- # kill -0 3262417 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@955 -- # uname 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262417 00:04:14.627 18:09:27 rpc -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262417' 00:04:14.627 killing process with pid 3262417 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@969 -- # kill 3262417 00:04:14.627 18:09:27 rpc -- common/autotest_common.sh@974 -- # wait 3262417 00:04:15.194 00:04:15.194 real 0m2.748s 00:04:15.194 user 0m3.355s 00:04:15.194 sys 0m0.914s 00:04:15.194 18:09:28 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.194 18:09:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.194 ************************************ 00:04:15.194 END TEST rpc 00:04:15.194 ************************************ 00:04:15.194 18:09:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.194 18:09:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.194 18:09:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.194 18:09:28 -- common/autotest_common.sh@10 -- # set +x 00:04:15.194 ************************************ 00:04:15.194 START TEST skip_rpc 00:04:15.194 ************************************ 00:04:15.194 18:09:28 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.194 * Looking for test storage... 00:04:15.194 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:15.194 18:09:28 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:15.194 18:09:28 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:15.194 18:09:28 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.453 18:09:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:15.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.453 --rc genhtml_branch_coverage=1 00:04:15.453 --rc genhtml_function_coverage=1 00:04:15.453 --rc genhtml_legend=1 00:04:15.453 --rc geninfo_all_blocks=1 00:04:15.453 --rc geninfo_unexecuted_blocks=1 00:04:15.453 00:04:15.453 ' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:15.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.453 --rc genhtml_branch_coverage=1 00:04:15.453 --rc genhtml_function_coverage=1 00:04:15.453 --rc genhtml_legend=1 00:04:15.453 --rc geninfo_all_blocks=1 00:04:15.453 --rc geninfo_unexecuted_blocks=1 00:04:15.453 00:04:15.453 ' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:15.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.453 --rc genhtml_branch_coverage=1 00:04:15.453 --rc genhtml_function_coverage=1 00:04:15.453 --rc genhtml_legend=1 00:04:15.453 --rc geninfo_all_blocks=1 00:04:15.453 --rc geninfo_unexecuted_blocks=1 00:04:15.453 00:04:15.453 ' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:15.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.453 --rc genhtml_branch_coverage=1 00:04:15.453 --rc genhtml_function_coverage=1 00:04:15.453 --rc genhtml_legend=1 00:04:15.453 --rc geninfo_all_blocks=1 00:04:15.453 --rc geninfo_unexecuted_blocks=1 00:04:15.453 00:04:15.453 ' 00:04:15.453 18:09:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:15.453 18:09:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:15.453 18:09:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.453 18:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.453 ************************************ 00:04:15.453 START TEST skip_rpc 00:04:15.453 ************************************ 00:04:15.453 18:09:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:15.453 18:09:28 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3262972 00:04:15.453 18:09:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.453 18:09:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:15.453 18:09:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:15.453 [2024-10-08 18:09:28.483114] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:04:15.453 [2024-10-08 18:09:28.483161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262972 ] 00:04:15.453 [2024-10-08 18:09:28.566307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.712 [2024-10-08 18:09:28.652915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3262972 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3262972 ']' 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3262972 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262972 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262972' 00:04:20.978 killing process with pid 3262972 00:04:20.978 18:09:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3262972 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3262972 00:04:20.978 00:04:20.978 real 0m5.453s 00:04:20.978 user 0m5.157s 00:04:20.978 sys 0m0.341s 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.978 18:09:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.978 ************************************ 00:04:20.978 END TEST skip_rpc 00:04:20.978 ************************************ 00:04:20.978 18:09:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.978 18:09:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.978 18:09:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.978 18:09:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.978 ************************************ 00:04:20.978 START TEST skip_rpc_with_json 00:04:20.978 ************************************ 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3263726 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3263726 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3263726 ']' 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.978 18:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.978 [2024-10-08 18:09:34.028683] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
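The skip_rpc case that finishes above checks the negative path: with --no-rpc-server the target starts (and is later torn down with kill -9), but no RPC listener is created, so the spdk_get_version call is required to fail. A hedged reconstruction of that check, again assuming rpc_cmd maps onto scripts/rpc.py and a built SPDK tree:

```bash
# Hedged sketch of the --no-rpc-server behaviour exercised by skip_rpc above.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
TGT_PID=$!
sleep 5                                        # the test also waits ~5 s before probing
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered although the server was disabled" >&2
    kill -9 "$TGT_PID"
    exit 1
fi
echo "expected failure: no /var/tmp/spdk.sock when --no-rpc-server is used"
kill -9 "$TGT_PID"
```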
00:04:20.978 [2024-10-08 18:09:34.028743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263726 ] 00:04:20.978 [2024-10-08 18:09:34.111667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.237 [2024-10-08 18:09:34.201916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.804 [2024-10-08 18:09:34.878922] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.804 request: 00:04:21.804 { 00:04:21.804 "trtype": "tcp", 00:04:21.804 "method": "nvmf_get_transports", 00:04:21.804 "req_id": 1 00:04:21.804 } 00:04:21.804 Got JSON-RPC error response 00:04:21.804 response: 00:04:21.804 { 00:04:21.804 "code": -19, 00:04:21.804 "message": "No such device" 00:04:21.804 } 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.804 [2024-10-08 18:09:34.891026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.804 18:09:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.064 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.064 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:22.064 { 00:04:22.064 "subsystems": [ 00:04:22.064 { 00:04:22.064 "subsystem": "fsdev", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "fsdev_set_opts", 00:04:22.064 "params": { 00:04:22.064 "fsdev_io_pool_size": 65535, 00:04:22.064 "fsdev_io_cache_size": 256 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "keyring", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "iobuf", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "iobuf_set_options", 00:04:22.064 "params": { 00:04:22.064 "small_pool_count": 8192, 00:04:22.064 "large_pool_count": 1024, 00:04:22.064 "small_bufsize": 8192, 00:04:22.064 "large_bufsize": 135168 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "sock", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": 
"sock_set_default_impl", 00:04:22.064 "params": { 00:04:22.064 "impl_name": "posix" 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "sock_impl_set_options", 00:04:22.064 "params": { 00:04:22.064 "impl_name": "ssl", 00:04:22.064 "recv_buf_size": 4096, 00:04:22.064 "send_buf_size": 4096, 00:04:22.064 "enable_recv_pipe": true, 00:04:22.064 "enable_quickack": false, 00:04:22.064 "enable_placement_id": 0, 00:04:22.064 "enable_zerocopy_send_server": true, 00:04:22.064 "enable_zerocopy_send_client": false, 00:04:22.064 "zerocopy_threshold": 0, 00:04:22.064 "tls_version": 0, 00:04:22.064 "enable_ktls": false 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "sock_impl_set_options", 00:04:22.064 "params": { 00:04:22.064 "impl_name": "posix", 00:04:22.064 "recv_buf_size": 2097152, 00:04:22.064 "send_buf_size": 2097152, 00:04:22.064 "enable_recv_pipe": true, 00:04:22.064 "enable_quickack": false, 00:04:22.064 "enable_placement_id": 0, 00:04:22.064 "enable_zerocopy_send_server": true, 00:04:22.064 "enable_zerocopy_send_client": false, 00:04:22.064 "zerocopy_threshold": 0, 00:04:22.064 "tls_version": 0, 00:04:22.064 "enable_ktls": false 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "vmd", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "accel", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "accel_set_options", 00:04:22.064 "params": { 00:04:22.064 "small_cache_size": 128, 00:04:22.064 "large_cache_size": 16, 00:04:22.064 "task_count": 2048, 00:04:22.064 "sequence_count": 2048, 00:04:22.064 "buf_count": 2048 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "bdev", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "bdev_set_options", 00:04:22.064 "params": { 00:04:22.064 "bdev_io_pool_size": 65535, 00:04:22.064 "bdev_io_cache_size": 256, 00:04:22.064 "bdev_auto_examine": true, 00:04:22.064 "iobuf_small_cache_size": 128, 00:04:22.064 "iobuf_large_cache_size": 16 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "bdev_raid_set_options", 00:04:22.064 "params": { 00:04:22.064 "process_window_size_kb": 1024, 00:04:22.064 "process_max_bandwidth_mb_sec": 0 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "bdev_iscsi_set_options", 00:04:22.064 "params": { 00:04:22.064 "timeout_sec": 30 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "bdev_nvme_set_options", 00:04:22.064 "params": { 00:04:22.064 "action_on_timeout": "none", 00:04:22.064 "timeout_us": 0, 00:04:22.064 "timeout_admin_us": 0, 00:04:22.064 "keep_alive_timeout_ms": 10000, 00:04:22.064 "arbitration_burst": 0, 00:04:22.064 "low_priority_weight": 0, 00:04:22.064 "medium_priority_weight": 0, 00:04:22.064 "high_priority_weight": 0, 00:04:22.064 "nvme_adminq_poll_period_us": 10000, 00:04:22.064 "nvme_ioq_poll_period_us": 0, 00:04:22.064 "io_queue_requests": 0, 00:04:22.064 "delay_cmd_submit": true, 00:04:22.064 "transport_retry_count": 4, 00:04:22.064 "bdev_retry_count": 3, 00:04:22.064 "transport_ack_timeout": 0, 00:04:22.064 "ctrlr_loss_timeout_sec": 0, 00:04:22.064 "reconnect_delay_sec": 0, 00:04:22.064 "fast_io_fail_timeout_sec": 0, 00:04:22.064 "disable_auto_failback": false, 00:04:22.064 "generate_uuids": false, 00:04:22.064 "transport_tos": 0, 00:04:22.064 "nvme_error_stat": false, 00:04:22.064 "rdma_srq_size": 0, 00:04:22.064 "io_path_stat": false, 00:04:22.064 
"allow_accel_sequence": false, 00:04:22.064 "rdma_max_cq_size": 0, 00:04:22.064 "rdma_cm_event_timeout_ms": 0, 00:04:22.064 "dhchap_digests": [ 00:04:22.064 "sha256", 00:04:22.064 "sha384", 00:04:22.064 "sha512" 00:04:22.064 ], 00:04:22.064 "dhchap_dhgroups": [ 00:04:22.064 "null", 00:04:22.064 "ffdhe2048", 00:04:22.064 "ffdhe3072", 00:04:22.064 "ffdhe4096", 00:04:22.064 "ffdhe6144", 00:04:22.064 "ffdhe8192" 00:04:22.064 ] 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "bdev_nvme_set_hotplug", 00:04:22.064 "params": { 00:04:22.064 "period_us": 100000, 00:04:22.064 "enable": false 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "bdev_wait_for_examine" 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "scsi", 00:04:22.064 "config": null 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "scheduler", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "framework_set_scheduler", 00:04:22.064 "params": { 00:04:22.064 "name": "static" 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "vhost_scsi", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "vhost_blk", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "ublk", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "nbd", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "nvmf", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "nvmf_set_config", 00:04:22.064 "params": { 00:04:22.064 "discovery_filter": "match_any", 00:04:22.064 "admin_cmd_passthru": { 00:04:22.064 "identify_ctrlr": false 00:04:22.064 }, 00:04:22.064 "dhchap_digests": [ 00:04:22.064 "sha256", 00:04:22.064 "sha384", 00:04:22.064 "sha512" 00:04:22.064 ], 00:04:22.064 "dhchap_dhgroups": [ 00:04:22.064 "null", 00:04:22.064 "ffdhe2048", 00:04:22.064 "ffdhe3072", 00:04:22.064 "ffdhe4096", 00:04:22.064 "ffdhe6144", 00:04:22.064 "ffdhe8192" 00:04:22.064 ] 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "nvmf_set_max_subsystems", 00:04:22.064 "params": { 00:04:22.064 "max_subsystems": 1024 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "nvmf_set_crdt", 00:04:22.064 "params": { 00:04:22.064 "crdt1": 0, 00:04:22.064 "crdt2": 0, 00:04:22.064 "crdt3": 0 00:04:22.064 } 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "method": "nvmf_create_transport", 00:04:22.064 "params": { 00:04:22.064 "trtype": "TCP", 00:04:22.064 "max_queue_depth": 128, 00:04:22.064 "max_io_qpairs_per_ctrlr": 127, 00:04:22.064 "in_capsule_data_size": 4096, 00:04:22.064 "max_io_size": 131072, 00:04:22.064 "io_unit_size": 131072, 00:04:22.064 "max_aq_depth": 128, 00:04:22.064 "num_shared_buffers": 511, 00:04:22.064 "buf_cache_size": 4294967295, 00:04:22.064 "dif_insert_or_strip": false, 00:04:22.064 "zcopy": false, 00:04:22.064 "c2h_success": true, 00:04:22.064 "sock_priority": 0, 00:04:22.064 "abort_timeout_sec": 1, 00:04:22.064 "ack_timeout": 0, 00:04:22.064 "data_wr_pool_size": 0 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "iscsi", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "iscsi_set_options", 00:04:22.064 "params": { 00:04:22.064 "node_base": "iqn.2016-06.io.spdk", 00:04:22.064 "max_sessions": 128, 00:04:22.064 "max_connections_per_session": 2, 00:04:22.064 "max_queue_depth": 64, 00:04:22.064 "default_time2wait": 2, 
00:04:22.064 "default_time2retain": 20, 00:04:22.064 "first_burst_length": 8192, 00:04:22.064 "immediate_data": true, 00:04:22.064 "allow_duplicated_isid": false, 00:04:22.064 "error_recovery_level": 0, 00:04:22.064 "nop_timeout": 60, 00:04:22.064 "nop_in_interval": 30, 00:04:22.064 "disable_chap": false, 00:04:22.064 "require_chap": false, 00:04:22.064 "mutual_chap": false, 00:04:22.064 "chap_group": 0, 00:04:22.064 "max_large_datain_per_connection": 64, 00:04:22.064 "max_r2t_per_connection": 4, 00:04:22.065 "pdu_pool_size": 36864, 00:04:22.065 "immediate_data_pool_size": 16384, 00:04:22.065 "data_out_pool_size": 2048 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 } 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3263726 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3263726 ']' 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3263726 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3263726 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3263726' 00:04:22.065 killing process with pid 3263726 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3263726 00:04:22.065 18:09:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3263726 00:04:22.633 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3263938 00:04:22.633 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:22.633 18:09:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3263938 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3263938 ']' 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3263938 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3263938 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3263938' 00:04:27.912 killing process with pid 3263938 00:04:27.912 18:09:40 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3263938 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3263938 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:27.912 00:04:27.912 real 0m6.977s 00:04:27.912 user 0m6.723s 00:04:27.912 sys 0m0.766s 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.912 18:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.912 ************************************ 00:04:27.912 END TEST skip_rpc_with_json 00:04:27.912 ************************************ 00:04:27.912 18:09:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:27.912 18:09:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.912 18:09:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.912 18:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.912 ************************************ 00:04:27.912 START TEST skip_rpc_with_delay 00:04:27.912 ************************************ 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.912 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.261 [2024-10-08 18:09:41.087120] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
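For reference, the skip_rpc_with_json flow traced above boils down to roughly the following shell sequence (a sketch only, using the stock spdk_tgt and rpc.py entry points; file names and redirections are illustrative):

    spdk_tgt -m 0x1 &                                    # start the target with its JSON-RPC server
    rpc.py nvmf_create_transport -t tcp                  # configure the TCP transport live over RPC
    rpc.py save_config > config.json                     # dump the running configuration as JSON
    kill %1; wait
    spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &   # replay the saved config with no RPC server
    sleep 5; kill %1; wait
    grep -q 'TCP Transport Init' log.txt                 # the transport must have been re-created from the JSON alone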
00:04:28.261 [2024-10-08 18:09:41.087194] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.261 00:04:28.261 real 0m0.078s 00:04:28.261 user 0m0.046s 00:04:28.261 sys 0m0.032s 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.261 18:09:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.261 ************************************ 00:04:28.261 END TEST skip_rpc_with_delay 00:04:28.261 ************************************ 00:04:28.261 18:09:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.261 18:09:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.261 18:09:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.261 18:09:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.261 18:09:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.261 18:09:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.261 ************************************ 00:04:28.261 START TEST exit_on_failed_rpc_init 00:04:28.261 ************************************ 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3264810 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3264810 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3264810 ']' 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.261 18:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.261 [2024-10-08 18:09:41.248088] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:04:28.261 [2024-10-08 18:09:41.248145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264810 ] 00:04:28.261 [2024-10-08 18:09:41.316122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.261 [2024-10-08 18:09:41.410037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.226 [2024-10-08 18:09:42.144158] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:04:29.226 [2024-10-08 18:09:42.144213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264893 ] 00:04:29.226 [2024-10-08 18:09:42.229488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.226 [2024-10-08 18:09:42.313979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.226 [2024-10-08 18:09:42.314078] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
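The failure provoked here is simply two spdk_tgt instances contending for the same RPC socket; a minimal sketch (core masks from the trace above, socket path assumed to be the default /var/tmp/spdk.sock):

    spdk_tgt -m 0x1 &        # first instance claims /var/tmp/spdk.sock
    spdk_tgt -m 0x2          # second instance aborts: RPC Unix domain socket path in use
    # running two targets side by side needs distinct sockets, e.g.
    # spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock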
00:04:29.226 [2024-10-08 18:09:42.314094] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.226 [2024-10-08 18:09:42.314103] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.226 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3264810 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3264810 ']' 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3264810 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264810 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3264810' 00:04:29.485 killing process with pid 3264810 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3264810 00:04:29.485 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3264810 00:04:29.744 00:04:29.744 real 0m1.645s 00:04:29.744 user 0m1.874s 00:04:29.744 sys 0m0.513s 00:04:29.744 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.744 18:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.744 ************************************ 00:04:29.744 END TEST exit_on_failed_rpc_init 00:04:29.744 ************************************ 00:04:29.744 18:09:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:29.744 00:04:29.744 real 0m14.686s 00:04:29.744 user 0m14.020s 00:04:29.744 sys 0m2.006s 00:04:29.744 18:09:42 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.744 18:09:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.744 ************************************ 00:04:29.744 END TEST skip_rpc 00:04:29.744 ************************************ 00:04:30.003 18:09:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.003 18:09:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.003 18:09:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.003 18:09:42 -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.003 ************************************ 00:04:30.003 START TEST rpc_client 00:04:30.003 ************************************ 00:04:30.004 18:09:42 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.004 * Looking for test storage... 00:04:30.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.004 18:09:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.004 --rc genhtml_branch_coverage=1 00:04:30.004 --rc genhtml_function_coverage=1 00:04:30.004 --rc genhtml_legend=1 00:04:30.004 --rc geninfo_all_blocks=1 00:04:30.004 --rc geninfo_unexecuted_blocks=1 00:04:30.004 00:04:30.004 ' 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.004 --rc genhtml_branch_coverage=1 00:04:30.004 --rc genhtml_function_coverage=1 00:04:30.004 --rc genhtml_legend=1 00:04:30.004 --rc geninfo_all_blocks=1 00:04:30.004 --rc geninfo_unexecuted_blocks=1 00:04:30.004 00:04:30.004 ' 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.004 --rc genhtml_branch_coverage=1 00:04:30.004 --rc genhtml_function_coverage=1 00:04:30.004 --rc genhtml_legend=1 00:04:30.004 --rc geninfo_all_blocks=1 00:04:30.004 --rc geninfo_unexecuted_blocks=1 00:04:30.004 00:04:30.004 ' 00:04:30.004 18:09:43 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.004 --rc genhtml_branch_coverage=1 00:04:30.004 --rc genhtml_function_coverage=1 00:04:30.004 --rc genhtml_legend=1 00:04:30.004 --rc geninfo_all_blocks=1 00:04:30.004 --rc geninfo_unexecuted_blocks=1 00:04:30.004 00:04:30.004 ' 00:04:30.004 18:09:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:30.264 OK 00:04:30.264 18:09:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.264 00:04:30.264 real 0m0.226s 00:04:30.264 user 0m0.112s 00:04:30.264 sys 0m0.131s 00:04:30.264 18:09:43 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.264 18:09:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:30.264 ************************************ 00:04:30.264 END TEST rpc_client 00:04:30.264 ************************************ 00:04:30.264 18:09:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.264 
18:09:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.264 18:09:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.264 18:09:43 -- common/autotest_common.sh@10 -- # set +x 00:04:30.264 ************************************ 00:04:30.264 START TEST json_config 00:04:30.264 ************************************ 00:04:30.264 18:09:43 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.264 18:09:43 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.264 18:09:43 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.264 18:09:43 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.264 18:09:43 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.264 18:09:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.264 18:09:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.264 18:09:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.264 18:09:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.264 18:09:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.264 18:09:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.264 18:09:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.264 18:09:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.264 18:09:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.264 18:09:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.264 18:09:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.264 18:09:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.264 18:09:43 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.264 18:09:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.264 18:09:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.524 18:09:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.524 18:09:43 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.524 18:09:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.524 18:09:43 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.524 18:09:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.524 18:09:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.524 18:09:43 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.524 18:09:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.524 18:09:43 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.524 18:09:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.524 18:09:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.524 18:09:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.524 18:09:43 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.524 --rc genhtml_branch_coverage=1 00:04:30.524 --rc genhtml_function_coverage=1 00:04:30.524 --rc genhtml_legend=1 00:04:30.524 --rc geninfo_all_blocks=1 00:04:30.524 --rc geninfo_unexecuted_blocks=1 00:04:30.524 00:04:30.524 ' 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.524 --rc genhtml_branch_coverage=1 00:04:30.524 --rc genhtml_function_coverage=1 00:04:30.524 --rc genhtml_legend=1 00:04:30.524 --rc geninfo_all_blocks=1 00:04:30.524 --rc geninfo_unexecuted_blocks=1 00:04:30.524 00:04:30.524 ' 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.524 --rc genhtml_branch_coverage=1 00:04:30.524 --rc genhtml_function_coverage=1 00:04:30.524 --rc genhtml_legend=1 00:04:30.524 --rc geninfo_all_blocks=1 00:04:30.524 --rc geninfo_unexecuted_blocks=1 00:04:30.524 00:04:30.524 ' 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.524 --rc genhtml_branch_coverage=1 00:04:30.524 --rc genhtml_function_coverage=1 00:04:30.524 --rc genhtml_legend=1 00:04:30.524 --rc geninfo_all_blocks=1 00:04:30.524 --rc geninfo_unexecuted_blocks=1 00:04:30.524 00:04:30.524 ' 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:30.524 18:09:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:30.524 18:09:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.524 18:09:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.524 18:09:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.524 18:09:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.524 18:09:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.524 18:09:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.524 18:09:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.524 18:09:43 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.524 18:09:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.524 
18:09:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.524 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.524 18:09:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:30.524 INFO: JSON configuration test init 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.524 18:09:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:30.524 18:09:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.525 18:09:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.525 18:09:43 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:30.525 18:09:43 json_config -- json_config/common.sh@10 -- # shift 00:04:30.525 18:09:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.525 18:09:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.525 18:09:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.525 18:09:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.525 18:09:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.525 18:09:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3265207 00:04:30.525 18:09:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.525 Waiting for target to run... 00:04:30.525 18:09:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3265207 /var/tmp/spdk_tgt.sock 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@831 -- # '[' -z 3265207 ']' 00:04:30.525 18:09:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.525 18:09:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.525 [2024-10-08 18:09:43.557708] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
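The json_config test brings its target up paused and then feeds it configuration over RPC; roughly as follows (the pipe from gen_nvme.sh into load_config is inferred from the tgt_rpc call recorded below; paths as in the trace):

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &    # start with RPC only, no subsystem init yet
    ./scripts/gen_nvme.sh --json-with-subsystems | \
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config            # push locally generated NVMe bdev config as JSON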
00:04:30.525 [2024-10-08 18:09:43.557767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265207 ] 00:04:31.093 [2024-10-08 18:09:44.135949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.093 [2024-10-08 18:09:44.226702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.351 18:09:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.351 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.351 18:09:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.351 18:09:44 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:31.351 18:09:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:34.640 18:09:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.640 18:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:34.640 18:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@54 -- # sort 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:34.640 18:09:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:34.640 18:09:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.640 18:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:34.900 18:09:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.900 18:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:34.900 18:09:47 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:34.900 18:09:47 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:34.900 18:09:47 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@440 -- # [[ phy-fallback != virt ]] 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:34.900 18:09:47 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:34.900 18:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:41.463 
18:09:54 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:04:41.463 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:41.463 18:09:54 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:04:41.464 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:04:41.464 18:09:54 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:04:41.464 Found net devices under 0000:18:00.0: mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:04:41.464 Found net devices under 0000:18:00.1: mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@440 -- # is_hw=yes 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@446 -- # rdma_device_init 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@62 -- # uname 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@528 -- # allocate_nic_ips 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:41.464 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:41.464 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:04:41.464 altname enp24s0f0np0 00:04:41.464 altname ens785f0np0 00:04:41.464 inet 192.168.100.8/24 scope global mlx_0_0 00:04:41.464 valid_lft forever preferred_lft forever 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:41.464 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:41.464 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:04:41.464 altname enp24s0f1np1 00:04:41.464 altname ens785f1np1 
00:04:41.464 inet 192.168.100.9/24 scope global mlx_0_1 00:04:41.464 valid_lft forever preferred_lft forever 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@448 -- # return 0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:04:41.464 192.168.100.9' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:04:41.464 192.168.100.9' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@483 -- # head -n 1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:41.464 18:09:54 json_config -- 
nvmf/common.sh@484 -- # echo '192.168.100.8 00:04:41.464 192.168.100.9' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@484 -- # tail -n +2 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@484 -- # head -n 1 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:04:41.464 18:09:54 json_config -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:04:41.723 18:09:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:41.723 18:09:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.723 18:09:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.723 MallocForNvmf0 00:04:41.723 18:09:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:41.723 18:09:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:41.982 MallocForNvmf1 00:04:41.982 18:09:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:41.982 18:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:42.241 [2024-10-08 18:09:55.250022] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:42.241 [2024-10-08 18:09:55.287045] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x146d020/0x147e530) succeed. 00:04:42.241 [2024-10-08 18:09:55.300389] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x146f210/0x14fe5c0) succeed. 
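Note: the trace above shows nvmf/common.sh discovering the mlx5 ports, loading the IB/RDMA kernel modules, harvesting each interface's first IPv4 address with an ip/awk/cut pipeline, and then splitting 192.168.100.8 / 192.168.100.9 into first and second target IPs with head/tail before modprobing nvme-rdma. A minimal stand-alone sketch of that harvesting step follows; the interface names are the ones from this run (mlx_0_0/mlx_0_1) and would differ on another host, and the extra `head -n 1` inside the helper is an addition for robustness, not part of the original script.

    #!/usr/bin/env bash
    # Sketch of the address harvesting seen in nvmf/common.sh (get_ip_address / allocate_nic_ips).
    get_ip_address() {
        local interface=$1
        # First IPv4 address of the interface, with the /prefix stripped.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }

    rdma_ips=""
    for nic in mlx_0_0 mlx_0_1; do        # interface names taken from this run
        rdma_ips+="$(get_ip_address "$nic")"$'\n'
    done

    # Same head / "tail -n +2 | head -n 1" split the trace uses to pick the two target IPs.
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
    echo "first:  $NVMF_FIRST_TARGET_IP"
    echo "second: $NVMF_SECOND_TARGET_IP"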
00:04:42.241 18:09:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.241 18:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.499 18:09:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.499 18:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.758 18:09:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.758 18:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.017 18:09:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:43.017 18:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:43.017 [2024-10-08 18:09:56.110311] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:43.017 18:09:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:43.017 18:09:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.017 18:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.017 18:09:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:43.017 18:09:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.017 18:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.275 18:09:56 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:43.275 18:09:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.275 18:09:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.275 MallocBdevForConfigChangeCheck 00:04:43.275 18:09:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:43.275 18:09:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.275 18:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.534 18:09:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:43.534 18:09:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.793 18:09:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:43.793 INFO: shutting down applications... 
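Note: json_config.sh lines @249-@256 drive the running target over its Unix-domain RPC socket: two malloc bdevs are created, an RDMA transport is added, subsystem nqn.2016-06.io.spdk:cnode1 is created with both bdevs attached as namespaces, and a listener is opened on 192.168.100.8:4420. Replayed by hand, the same sequence looks roughly like the sketch below; the rpc.py path and socket are the ones from this workspace, and the comments only restate the options as they appear in the trace.

    # RPC helper pointing at the target's control socket (unquoted on purpose so it word-splits).
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs for the two namespaces (size / block size as recorded above).
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # RDMA transport with the same -u / -c options the trace records.
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0

    # Subsystem with serial number, both namespaces, and an RDMA listener on the first target IP.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420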
00:04:43.793 18:09:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:43.793 18:09:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:43.793 18:09:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:43.793 18:09:56 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:47.982 Calling clear_iscsi_subsystem 00:04:47.982 Calling clear_nvmf_subsystem 00:04:47.982 Calling clear_nbd_subsystem 00:04:47.982 Calling clear_ublk_subsystem 00:04:47.982 Calling clear_vhost_blk_subsystem 00:04:47.982 Calling clear_vhost_scsi_subsystem 00:04:47.982 Calling clear_bdev_subsystem 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.982 18:10:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:47.982 18:10:01 json_config -- json_config/json_config.sh@352 -- # break 00:04:47.982 18:10:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:47.982 18:10:01 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:47.982 18:10:01 json_config -- json_config/common.sh@31 -- # local app=target 00:04:47.982 18:10:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.982 18:10:01 json_config -- json_config/common.sh@35 -- # [[ -n 3265207 ]] 00:04:47.982 18:10:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3265207 00:04:47.982 18:10:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.982 18:10:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.982 18:10:01 json_config -- json_config/common.sh@41 -- # kill -0 3265207 00:04:47.982 18:10:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.551 18:10:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.551 18:10:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.551 18:10:01 json_config -- json_config/common.sh@41 -- # kill -0 3265207 00:04:48.551 18:10:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.551 18:10:01 json_config -- json_config/common.sh@43 -- # break 00:04:48.551 18:10:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.551 18:10:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.551 SPDK target shutdown done 00:04:48.551 18:10:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:48.551 INFO: relaunching applications... 
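Note: after save_config and clear_config.py have emptied the target, json_config_test_shutdown_app sends SIGINT and then polls the pid for up to 30 half-second intervals with `kill -0` before printing 'SPDK target shutdown done'. A hedged re-creation of that wait loop, simplified from the per-app bookkeeping in test/json_config/common.sh:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"

        # Poll for up to 30 * 0.5 s = 15 s, mirroring the (( i < 30 )) / sleep 0.5 loop in the trace.
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done

        echo "app with pid $pid still alive after 15s" >&2
        return 1
    }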
00:04:48.551 18:10:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.551 18:10:01 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.551 18:10:01 json_config -- json_config/common.sh@10 -- # shift 00:04:48.551 18:10:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.551 18:10:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.551 18:10:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.551 18:10:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.551 18:10:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.551 18:10:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3269646 00:04:48.551 18:10:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.551 Waiting for target to run... 00:04:48.551 18:10:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.551 18:10:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3269646 /var/tmp/spdk_tgt.sock 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@831 -- # '[' -z 3269646 ']' 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.551 18:10:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.551 [2024-10-08 18:10:01.617876] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:04:48.551 [2024-10-08 18:10:01.617942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269646 ] 00:04:49.119 [2024-10-08 18:10:02.198138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.377 [2024-10-08 18:10:02.298007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.665 [2024-10-08 18:10:05.363708] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12f6bc0/0x12901a0) succeed. 00:04:52.665 [2024-10-08 18:10:05.375217] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12f8db0/0x1325240) succeed. 
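Note: the relaunch step starts a fresh spdk_tgt from the JSON written earlier by save_config, on the same RPC socket, and blocks until it answers ('Waiting for target to run...'). A rough equivalent is sketched below; the retry loop is only a stand-in for the repo's waitforlisten helper, whose real body is not shown in this trace.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    # Relaunch with the configuration captured by save_config.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/spdk_tgt_config.json" &
    tgt_pid=$!

    # Stand-in for waitforlisten: retry a cheap RPC until the socket answers.
    for ((i = 0; i < 100; i++)); do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done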
00:04:52.665 [2024-10-08 18:10:05.431844] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:52.924 18:10:05 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.924 18:10:05 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:52.924 18:10:05 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.924 00:04:52.924 18:10:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:52.924 18:10:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:52.924 INFO: Checking if target configuration is the same... 00:04:52.924 18:10:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.924 18:10:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:52.924 18:10:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.924 + '[' 2 -ne 2 ']' 00:04:52.924 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:52.924 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:52.924 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:52.924 +++ basename /dev/fd/62 00:04:52.924 ++ mktemp /tmp/62.XXX 00:04:52.924 + tmp_file_1=/tmp/62.5bJ 00:04:52.924 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.924 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.924 + tmp_file_2=/tmp/spdk_tgt_config.json.jJu 00:04:52.924 + ret=0 00:04:52.924 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.182 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.182 + diff -u /tmp/62.5bJ /tmp/spdk_tgt_config.json.jJu 00:04:53.182 + echo 'INFO: JSON config files are the same' 00:04:53.182 INFO: JSON config files are the same 00:04:53.182 + rm /tmp/62.5bJ /tmp/spdk_tgt_config.json.jJu 00:04:53.182 + exit 0 00:04:53.182 18:10:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:53.182 18:10:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:53.182 INFO: changing configuration and checking if this can be detected... 
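Note: 'Checking if target configuration is the same...' is implemented by json_diff.sh: the live configuration is dumped again with save_config, both that dump and the on-disk spdk_tgt_config.json are canonicalised by config_filter.py -method sort into temp files, and a plain diff -u decides the verdict. Roughly, and assuming (as the pipeline above suggests) that config_filter.py reads the JSON on stdin:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="$SPDK/test/json_config/config_filter.py"

    live=$(mktemp /tmp/62.XXX)                     # sorted live config
    file=$(mktemp /tmp/spdk_tgt_config.json.XXX)   # sorted saved config

    $RPC save_config                  | $FILTER -method sort > "$live"
    $FILTER -method sort < "$SPDK/spdk_tgt_config.json" > "$file"

    if diff -u "$live" "$file"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$file"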
00:04:53.182 18:10:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.182 18:10:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.441 18:10:06 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.441 18:10:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:53.441 18:10:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.441 + '[' 2 -ne 2 ']' 00:04:53.441 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:53.441 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:53.441 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:53.441 +++ basename /dev/fd/62 00:04:53.441 ++ mktemp /tmp/62.XXX 00:04:53.441 + tmp_file_1=/tmp/62.Tma 00:04:53.441 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.441 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.441 + tmp_file_2=/tmp/spdk_tgt_config.json.uN3 00:04:53.441 + ret=0 00:04:53.441 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.699 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.958 + diff -u /tmp/62.Tma /tmp/spdk_tgt_config.json.uN3 00:04:53.958 + ret=1 00:04:53.958 + echo '=== Start of file: /tmp/62.Tma ===' 00:04:53.958 + cat /tmp/62.Tma 00:04:53.958 + echo '=== End of file: /tmp/62.Tma ===' 00:04:53.958 + echo '' 00:04:53.958 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uN3 ===' 00:04:53.958 + cat /tmp/spdk_tgt_config.json.uN3 00:04:53.958 + echo '=== End of file: /tmp/spdk_tgt_config.json.uN3 ===' 00:04:53.958 + echo '' 00:04:53.958 + rm /tmp/62.Tma /tmp/spdk_tgt_config.json.uN3 00:04:53.958 + exit 1 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:53.958 INFO: configuration change detected. 
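Note: the negative test then deletes MallocBdevForConfigChangeCheck over RPC and repeats exactly the same comparison; this time diff exits non-zero (ret=1), the start and end of both temp files are echoed for debugging, and 'configuration change detected.' is reported. A condensed, self-contained sketch of that step:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="$SPDK/test/json_config/config_filter.py"

    # Remove the sentinel bdev whose only purpose is to make the configs diverge.
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck

    # The same sorted comparison as before is now expected to fail.
    if ! diff -u <($RPC save_config | $FILTER -method sort) \
                 <($FILTER -method sort < "$SPDK/spdk_tgt_config.json"); then
        echo 'INFO: configuration change detected.'
    fi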
00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 3269646 ]] 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.958 18:10:06 json_config -- json_config/json_config.sh@330 -- # killprocess 3269646 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@950 -- # '[' -z 3269646 ']' 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@954 -- # kill -0 3269646 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@955 -- # uname 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.958 18:10:06 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269646 00:04:53.958 18:10:07 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.958 18:10:07 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.958 18:10:07 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269646' 00:04:53.958 killing process with pid 3269646 00:04:53.958 18:10:07 json_config -- common/autotest_common.sh@969 -- # kill 3269646 00:04:53.958 18:10:07 json_config -- common/autotest_common.sh@974 -- # wait 3269646 00:04:58.146 18:10:10 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.146 18:10:10 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:58.146 18:10:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.146 18:10:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.146 18:10:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:58.146 18:10:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:58.146 INFO: Success 00:04:58.146 18:10:10 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@514 -- # nvmfcleanup 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@121 -- # sync 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:04:58.146 18:10:10 json_config -- nvmf/common.sh@521 -- # [[ '' == \t\c\p ]] 00:04:58.146 00:04:58.146 real 0m27.687s 00:04:58.146 user 0m29.635s 00:04:58.146 sys 0m8.596s 00:04:58.146 18:10:10 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.146 18:10:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.146 ************************************ 00:04:58.146 END TEST json_config 00:04:58.146 ************************************ 00:04:58.146 18:10:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.146 18:10:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.146 18:10:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.146 18:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:58.146 ************************************ 00:04:58.146 START TEST json_config_extra_key 00:04:58.146 ************************************ 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.146 18:10:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.146 --rc genhtml_branch_coverage=1 00:04:58.146 --rc genhtml_function_coverage=1 00:04:58.146 --rc genhtml_legend=1 00:04:58.146 --rc geninfo_all_blocks=1 00:04:58.146 --rc geninfo_unexecuted_blocks=1 00:04:58.146 00:04:58.146 ' 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.146 --rc genhtml_branch_coverage=1 00:04:58.146 --rc genhtml_function_coverage=1 00:04:58.146 --rc genhtml_legend=1 00:04:58.146 --rc geninfo_all_blocks=1 00:04:58.146 --rc geninfo_unexecuted_blocks=1 00:04:58.146 00:04:58.146 ' 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.146 --rc genhtml_branch_coverage=1 00:04:58.146 --rc genhtml_function_coverage=1 00:04:58.146 --rc genhtml_legend=1 00:04:58.146 --rc geninfo_all_blocks=1 00:04:58.146 --rc geninfo_unexecuted_blocks=1 00:04:58.146 00:04:58.146 ' 00:04:58.146 18:10:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.146 --rc genhtml_branch_coverage=1 00:04:58.146 --rc genhtml_function_coverage=1 00:04:58.146 --rc genhtml_legend=1 00:04:58.146 --rc geninfo_all_blocks=1 00:04:58.146 --rc geninfo_unexecuted_blocks=1 00:04:58.146 00:04:58.146 ' 00:04:58.146 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.146 18:10:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.146 18:10:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.147 
18:10:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:58.147 18:10:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.147 18:10:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.147 18:10:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.147 18:10:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.147 18:10:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.147 18:10:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.147 18:10:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.147 18:10:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.147 18:10:11 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.147 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.147 18:10:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.147 INFO: launching applications... 
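Note: one real (if harmless) error is captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable being tested (not identified in this trace) expands to the empty string, and test(1) rejects that with '[: : integer expression expected'; the condition is simply treated as false and the run continues. The snippet below is only an illustration of the failure mode and of one conventional guard (a numeric default expansion), not a claim about how the upstream script is or should be written; SOME_FLAG is a stand-in name.

    #!/usr/bin/env bash
    SOME_FLAG=""            # stand-in for the empty variable seen in the trace

    # Reproduces the error: '' is not an integer, so test(1) complains and returns false.
    if [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag is 1"
    fi   # -> "[: : integer expression expected" on stderr

    # One common guard: give the variable a numeric default before comparing.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag is 1"
    fi   # quiet; an empty value is treated as 0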
00:04:58.147 18:10:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3271067 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.147 Waiting for target to run... 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3271067 /var/tmp/spdk_tgt.sock 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3271067 ']' 00:04:58.147 18:10:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.147 18:10:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.147 [2024-10-08 18:10:11.316033] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:04:58.147 [2024-10-08 18:10:11.316095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271067 ] 00:04:58.716 [2024-10-08 18:10:11.646849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.716 [2024-10-08 18:10:11.720577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.976 18:10:12 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.976 18:10:12 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:58.976 18:10:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.976 00:04:58.976 18:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:58.976 INFO: shutting down applications... 
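Note: json_config_extra_key.sh keeps per-application state in bash associative arrays, app_pid, app_socket, app_params and configs_path, all keyed by the app name ('target' here), and the launch line above is assembled from them before waitforlisten blocks on the socket. A compressed sketch of that bookkeeping; the values are the ones shown in the trace and the launch is simplified to a plain background job.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")

    app=target
    # app_params left unquoted on purpose so '-m 0x1 -s 1024' splits into separate arguments.
    "$SPDK/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run..."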
00:04:58.976 18:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.976 18:10:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.976 18:10:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3271067 ]] 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3271067 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3271067 00:04:59.235 18:10:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3271067 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.495 18:10:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.495 SPDK target shutdown done 00:04:59.495 18:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.495 Success 00:04:59.495 00:04:59.495 real 0m1.602s 00:04:59.495 user 0m1.388s 00:04:59.495 sys 0m0.482s 00:04:59.495 18:10:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.495 18:10:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.495 ************************************ 00:04:59.495 END TEST json_config_extra_key 00:04:59.495 ************************************ 00:04:59.754 18:10:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.754 18:10:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.754 18:10:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.754 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.754 ************************************ 00:04:59.754 START TEST alias_rpc 00:04:59.754 ************************************ 00:04:59.754 18:10:12 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.754 * Looking for test storage... 
00:04:59.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:59.754 18:10:12 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.754 18:10:12 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.754 18:10:12 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.754 18:10:12 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.754 18:10:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.014 18:10:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.014 18:10:12 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.014 18:10:12 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.014 --rc genhtml_branch_coverage=1 00:05:00.014 --rc genhtml_function_coverage=1 00:05:00.014 --rc genhtml_legend=1 00:05:00.014 --rc geninfo_all_blocks=1 00:05:00.014 --rc geninfo_unexecuted_blocks=1 00:05:00.014 00:05:00.014 ' 00:05:00.014 18:10:12 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.014 --rc genhtml_branch_coverage=1 00:05:00.014 --rc genhtml_function_coverage=1 00:05:00.014 --rc genhtml_legend=1 00:05:00.014 --rc geninfo_all_blocks=1 00:05:00.014 --rc geninfo_unexecuted_blocks=1 00:05:00.014 00:05:00.014 ' 00:05:00.014 18:10:12 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.014 --rc genhtml_branch_coverage=1 00:05:00.014 --rc genhtml_function_coverage=1 00:05:00.014 --rc genhtml_legend=1 00:05:00.014 --rc geninfo_all_blocks=1 00:05:00.014 --rc geninfo_unexecuted_blocks=1 00:05:00.014 00:05:00.014 ' 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.015 --rc genhtml_branch_coverage=1 00:05:00.015 --rc genhtml_function_coverage=1 00:05:00.015 --rc genhtml_legend=1 00:05:00.015 --rc geninfo_all_blocks=1 00:05:00.015 --rc geninfo_unexecuted_blocks=1 00:05:00.015 00:05:00.015 ' 00:05:00.015 18:10:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.015 18:10:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3271447 00:05:00.015 18:10:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.015 18:10:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3271447 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3271447 ']' 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.015 18:10:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.015 [2024-10-08 18:10:12.985898] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
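Note: the scripts/common.sh excerpts repeated in this block (and in the earlier json_config_extra_key block) show how the harness picks lcov options: `lt 1.15 2` calls `cmp_versions 1.15 '<' 2`, which splits both version strings on '.', '-' and ':' and compares them field by field. The following is a condensed, hedged reconstruction of that logic, not a copy of the upstream helper (which also routes through a `decimal` sanitiser):

    # Field-wise version comparison in the spirit of scripts/common.sh cmp_versions.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        local v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [[ $op == '>' ]]; return
            elif ((d1 < d2)); then [[ $op == '<' ]]; return
            fi
        done
        # All fields equal: only the equality-accepting operators succeed.
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "1.15 < 2"    # true, so the newer lcov flags are selected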
00:05:00.015 [2024-10-08 18:10:12.985961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271447 ] 00:05:00.015 [2024-10-08 18:10:13.067321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.015 [2024-10-08 18:10:13.148928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.952 18:10:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.952 18:10:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:00.952 18:10:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:00.952 18:10:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3271447 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3271447 ']' 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3271447 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3271447 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3271447' 00:05:00.952 killing process with pid 3271447 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@969 -- # kill 3271447 00:05:00.952 18:10:14 alias_rpc -- common/autotest_common.sh@974 -- # wait 3271447 00:05:01.521 00:05:01.521 real 0m1.734s 00:05:01.521 user 0m1.838s 00:05:01.521 sys 0m0.513s 00:05:01.521 18:10:14 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.521 18:10:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 ************************************ 00:05:01.521 END TEST alias_rpc 00:05:01.521 ************************************ 00:05:01.521 18:10:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.521 18:10:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.521 18:10:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.521 18:10:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.521 18:10:14 -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 ************************************ 00:05:01.521 START TEST spdkcli_tcp 00:05:01.521 ************************************ 00:05:01.521 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.521 * Looking for test storage... 
00:05:01.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:01.521 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.521 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.521 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.781 18:10:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.781 --rc genhtml_branch_coverage=1 00:05:01.781 --rc genhtml_function_coverage=1 00:05:01.781 --rc genhtml_legend=1 00:05:01.781 --rc geninfo_all_blocks=1 00:05:01.781 --rc geninfo_unexecuted_blocks=1 00:05:01.781 00:05:01.781 ' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.781 --rc genhtml_branch_coverage=1 00:05:01.781 --rc genhtml_function_coverage=1 00:05:01.781 --rc genhtml_legend=1 00:05:01.781 --rc geninfo_all_blocks=1 00:05:01.781 --rc geninfo_unexecuted_blocks=1 
00:05:01.781 00:05:01.781 ' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.781 --rc genhtml_branch_coverage=1 00:05:01.781 --rc genhtml_function_coverage=1 00:05:01.781 --rc genhtml_legend=1 00:05:01.781 --rc geninfo_all_blocks=1 00:05:01.781 --rc geninfo_unexecuted_blocks=1 00:05:01.781 00:05:01.781 ' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.781 --rc genhtml_branch_coverage=1 00:05:01.781 --rc genhtml_function_coverage=1 00:05:01.781 --rc genhtml_legend=1 00:05:01.781 --rc geninfo_all_blocks=1 00:05:01.781 --rc geninfo_unexecuted_blocks=1 00:05:01.781 00:05:01.781 ' 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3271700 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.781 18:10:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3271700 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3271700 ']' 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.781 18:10:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.781 [2024-10-08 18:10:14.826014] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
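Note: the spdkcli_tcp test that starts here exercises rpc.py over TCP instead of the Unix socket: spdk_tgt is launched with -m 0x3 -p 0, and, as the following lines show, a socat process forwards 127.0.0.1:9998 to /var/tmp/spdk.sock so that `rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods` can be issued against the TCP endpoint. A sketch of that bridge using only the commands visible in the trace; the sleep is a crude stand-in for waitforlisten.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Target on two cores (mask 0x3), main core 0, default RPC socket /var/tmp/spdk.sock.
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -p 0 &
    spdk_tgt_pid=$!
    sleep 1   # stand-in for waitforlisten

    # Forward a local TCP port to the Unix-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Talk to the target through the TCP side (-r retries, -t timeout, as recorded in the trace).
    "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" "$spdk_tgt_pid"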
00:05:01.781 [2024-10-08 18:10:14.826076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271700 ] 00:05:01.781 [2024-10-08 18:10:14.912098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.040 [2024-10-08 18:10:15.002696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.040 [2024-10-08 18:10:15.002697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.606 18:10:15 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.606 18:10:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:02.606 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3271882 00:05:02.606 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.606 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.865 [ 00:05:02.865 "bdev_malloc_delete", 00:05:02.865 "bdev_malloc_create", 00:05:02.865 "bdev_null_resize", 00:05:02.865 "bdev_null_delete", 00:05:02.865 "bdev_null_create", 00:05:02.865 "bdev_nvme_cuse_unregister", 00:05:02.865 "bdev_nvme_cuse_register", 00:05:02.865 "bdev_opal_new_user", 00:05:02.865 "bdev_opal_set_lock_state", 00:05:02.865 "bdev_opal_delete", 00:05:02.865 "bdev_opal_get_info", 00:05:02.865 "bdev_opal_create", 00:05:02.865 "bdev_nvme_opal_revert", 00:05:02.865 "bdev_nvme_opal_init", 00:05:02.865 "bdev_nvme_send_cmd", 00:05:02.865 "bdev_nvme_set_keys", 00:05:02.865 "bdev_nvme_get_path_iostat", 00:05:02.865 "bdev_nvme_get_mdns_discovery_info", 00:05:02.865 "bdev_nvme_stop_mdns_discovery", 00:05:02.865 "bdev_nvme_start_mdns_discovery", 00:05:02.865 "bdev_nvme_set_multipath_policy", 00:05:02.865 "bdev_nvme_set_preferred_path", 00:05:02.865 "bdev_nvme_get_io_paths", 00:05:02.865 "bdev_nvme_remove_error_injection", 00:05:02.865 "bdev_nvme_add_error_injection", 00:05:02.865 "bdev_nvme_get_discovery_info", 00:05:02.865 "bdev_nvme_stop_discovery", 00:05:02.865 "bdev_nvme_start_discovery", 00:05:02.865 "bdev_nvme_get_controller_health_info", 00:05:02.865 "bdev_nvme_disable_controller", 00:05:02.865 "bdev_nvme_enable_controller", 00:05:02.865 "bdev_nvme_reset_controller", 00:05:02.865 "bdev_nvme_get_transport_statistics", 00:05:02.865 "bdev_nvme_apply_firmware", 00:05:02.865 "bdev_nvme_detach_controller", 00:05:02.865 "bdev_nvme_get_controllers", 00:05:02.865 "bdev_nvme_attach_controller", 00:05:02.865 "bdev_nvme_set_hotplug", 00:05:02.865 "bdev_nvme_set_options", 00:05:02.865 "bdev_passthru_delete", 00:05:02.865 "bdev_passthru_create", 00:05:02.865 "bdev_lvol_set_parent_bdev", 00:05:02.865 "bdev_lvol_set_parent", 00:05:02.865 "bdev_lvol_check_shallow_copy", 00:05:02.865 "bdev_lvol_start_shallow_copy", 00:05:02.865 "bdev_lvol_grow_lvstore", 00:05:02.865 "bdev_lvol_get_lvols", 00:05:02.865 "bdev_lvol_get_lvstores", 00:05:02.865 "bdev_lvol_delete", 00:05:02.865 "bdev_lvol_set_read_only", 00:05:02.865 "bdev_lvol_resize", 00:05:02.865 "bdev_lvol_decouple_parent", 00:05:02.865 "bdev_lvol_inflate", 00:05:02.865 "bdev_lvol_rename", 00:05:02.865 "bdev_lvol_clone_bdev", 00:05:02.865 "bdev_lvol_clone", 00:05:02.865 "bdev_lvol_snapshot", 00:05:02.865 "bdev_lvol_create", 00:05:02.865 "bdev_lvol_delete_lvstore", 00:05:02.865 "bdev_lvol_rename_lvstore", 
00:05:02.865 "bdev_lvol_create_lvstore", 00:05:02.865 "bdev_raid_set_options", 00:05:02.865 "bdev_raid_remove_base_bdev", 00:05:02.865 "bdev_raid_add_base_bdev", 00:05:02.865 "bdev_raid_delete", 00:05:02.865 "bdev_raid_create", 00:05:02.865 "bdev_raid_get_bdevs", 00:05:02.865 "bdev_error_inject_error", 00:05:02.865 "bdev_error_delete", 00:05:02.865 "bdev_error_create", 00:05:02.865 "bdev_split_delete", 00:05:02.865 "bdev_split_create", 00:05:02.865 "bdev_delay_delete", 00:05:02.865 "bdev_delay_create", 00:05:02.865 "bdev_delay_update_latency", 00:05:02.865 "bdev_zone_block_delete", 00:05:02.865 "bdev_zone_block_create", 00:05:02.865 "blobfs_create", 00:05:02.865 "blobfs_detect", 00:05:02.865 "blobfs_set_cache_size", 00:05:02.865 "bdev_aio_delete", 00:05:02.865 "bdev_aio_rescan", 00:05:02.865 "bdev_aio_create", 00:05:02.865 "bdev_ftl_set_property", 00:05:02.865 "bdev_ftl_get_properties", 00:05:02.865 "bdev_ftl_get_stats", 00:05:02.865 "bdev_ftl_unmap", 00:05:02.865 "bdev_ftl_unload", 00:05:02.865 "bdev_ftl_delete", 00:05:02.865 "bdev_ftl_load", 00:05:02.865 "bdev_ftl_create", 00:05:02.865 "bdev_virtio_attach_controller", 00:05:02.865 "bdev_virtio_scsi_get_devices", 00:05:02.865 "bdev_virtio_detach_controller", 00:05:02.865 "bdev_virtio_blk_set_hotplug", 00:05:02.865 "bdev_iscsi_delete", 00:05:02.865 "bdev_iscsi_create", 00:05:02.865 "bdev_iscsi_set_options", 00:05:02.865 "accel_error_inject_error", 00:05:02.865 "ioat_scan_accel_module", 00:05:02.865 "dsa_scan_accel_module", 00:05:02.865 "iaa_scan_accel_module", 00:05:02.865 "keyring_file_remove_key", 00:05:02.865 "keyring_file_add_key", 00:05:02.865 "keyring_linux_set_options", 00:05:02.865 "fsdev_aio_delete", 00:05:02.865 "fsdev_aio_create", 00:05:02.865 "iscsi_get_histogram", 00:05:02.865 "iscsi_enable_histogram", 00:05:02.865 "iscsi_set_options", 00:05:02.865 "iscsi_get_auth_groups", 00:05:02.865 "iscsi_auth_group_remove_secret", 00:05:02.865 "iscsi_auth_group_add_secret", 00:05:02.865 "iscsi_delete_auth_group", 00:05:02.865 "iscsi_create_auth_group", 00:05:02.865 "iscsi_set_discovery_auth", 00:05:02.865 "iscsi_get_options", 00:05:02.865 "iscsi_target_node_request_logout", 00:05:02.865 "iscsi_target_node_set_redirect", 00:05:02.866 "iscsi_target_node_set_auth", 00:05:02.866 "iscsi_target_node_add_lun", 00:05:02.866 "iscsi_get_stats", 00:05:02.866 "iscsi_get_connections", 00:05:02.866 "iscsi_portal_group_set_auth", 00:05:02.866 "iscsi_start_portal_group", 00:05:02.866 "iscsi_delete_portal_group", 00:05:02.866 "iscsi_create_portal_group", 00:05:02.866 "iscsi_get_portal_groups", 00:05:02.866 "iscsi_delete_target_node", 00:05:02.866 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.866 "iscsi_target_node_add_pg_ig_maps", 00:05:02.866 "iscsi_create_target_node", 00:05:02.866 "iscsi_get_target_nodes", 00:05:02.866 "iscsi_delete_initiator_group", 00:05:02.866 "iscsi_initiator_group_remove_initiators", 00:05:02.866 "iscsi_initiator_group_add_initiators", 00:05:02.866 "iscsi_create_initiator_group", 00:05:02.866 "iscsi_get_initiator_groups", 00:05:02.866 "nvmf_set_crdt", 00:05:02.866 "nvmf_set_config", 00:05:02.866 "nvmf_set_max_subsystems", 00:05:02.866 "nvmf_stop_mdns_prr", 00:05:02.866 "nvmf_publish_mdns_prr", 00:05:02.866 "nvmf_subsystem_get_listeners", 00:05:02.866 "nvmf_subsystem_get_qpairs", 00:05:02.866 "nvmf_subsystem_get_controllers", 00:05:02.866 "nvmf_get_stats", 00:05:02.866 "nvmf_get_transports", 00:05:02.866 "nvmf_create_transport", 00:05:02.866 "nvmf_get_targets", 00:05:02.866 "nvmf_delete_target", 00:05:02.866 "nvmf_create_target", 
00:05:02.866 "nvmf_subsystem_allow_any_host", 00:05:02.866 "nvmf_subsystem_set_keys", 00:05:02.866 "nvmf_subsystem_remove_host", 00:05:02.866 "nvmf_subsystem_add_host", 00:05:02.866 "nvmf_ns_remove_host", 00:05:02.866 "nvmf_ns_add_host", 00:05:02.866 "nvmf_subsystem_remove_ns", 00:05:02.866 "nvmf_subsystem_set_ns_ana_group", 00:05:02.866 "nvmf_subsystem_add_ns", 00:05:02.866 "nvmf_subsystem_listener_set_ana_state", 00:05:02.866 "nvmf_discovery_get_referrals", 00:05:02.866 "nvmf_discovery_remove_referral", 00:05:02.866 "nvmf_discovery_add_referral", 00:05:02.866 "nvmf_subsystem_remove_listener", 00:05:02.866 "nvmf_subsystem_add_listener", 00:05:02.866 "nvmf_delete_subsystem", 00:05:02.866 "nvmf_create_subsystem", 00:05:02.866 "nvmf_get_subsystems", 00:05:02.866 "env_dpdk_get_mem_stats", 00:05:02.866 "nbd_get_disks", 00:05:02.866 "nbd_stop_disk", 00:05:02.866 "nbd_start_disk", 00:05:02.866 "ublk_recover_disk", 00:05:02.866 "ublk_get_disks", 00:05:02.866 "ublk_stop_disk", 00:05:02.866 "ublk_start_disk", 00:05:02.866 "ublk_destroy_target", 00:05:02.866 "ublk_create_target", 00:05:02.866 "virtio_blk_create_transport", 00:05:02.866 "virtio_blk_get_transports", 00:05:02.866 "vhost_controller_set_coalescing", 00:05:02.866 "vhost_get_controllers", 00:05:02.866 "vhost_delete_controller", 00:05:02.866 "vhost_create_blk_controller", 00:05:02.866 "vhost_scsi_controller_remove_target", 00:05:02.866 "vhost_scsi_controller_add_target", 00:05:02.866 "vhost_start_scsi_controller", 00:05:02.866 "vhost_create_scsi_controller", 00:05:02.866 "thread_set_cpumask", 00:05:02.866 "scheduler_set_options", 00:05:02.866 "framework_get_governor", 00:05:02.866 "framework_get_scheduler", 00:05:02.866 "framework_set_scheduler", 00:05:02.866 "framework_get_reactors", 00:05:02.866 "thread_get_io_channels", 00:05:02.866 "thread_get_pollers", 00:05:02.866 "thread_get_stats", 00:05:02.866 "framework_monitor_context_switch", 00:05:02.866 "spdk_kill_instance", 00:05:02.866 "log_enable_timestamps", 00:05:02.866 "log_get_flags", 00:05:02.866 "log_clear_flag", 00:05:02.866 "log_set_flag", 00:05:02.866 "log_get_level", 00:05:02.866 "log_set_level", 00:05:02.866 "log_get_print_level", 00:05:02.866 "log_set_print_level", 00:05:02.866 "framework_enable_cpumask_locks", 00:05:02.866 "framework_disable_cpumask_locks", 00:05:02.866 "framework_wait_init", 00:05:02.866 "framework_start_init", 00:05:02.866 "scsi_get_devices", 00:05:02.866 "bdev_get_histogram", 00:05:02.866 "bdev_enable_histogram", 00:05:02.866 "bdev_set_qos_limit", 00:05:02.866 "bdev_set_qd_sampling_period", 00:05:02.866 "bdev_get_bdevs", 00:05:02.866 "bdev_reset_iostat", 00:05:02.866 "bdev_get_iostat", 00:05:02.866 "bdev_examine", 00:05:02.866 "bdev_wait_for_examine", 00:05:02.866 "bdev_set_options", 00:05:02.866 "accel_get_stats", 00:05:02.866 "accel_set_options", 00:05:02.866 "accel_set_driver", 00:05:02.866 "accel_crypto_key_destroy", 00:05:02.866 "accel_crypto_keys_get", 00:05:02.866 "accel_crypto_key_create", 00:05:02.866 "accel_assign_opc", 00:05:02.866 "accel_get_module_info", 00:05:02.866 "accel_get_opc_assignments", 00:05:02.866 "vmd_rescan", 00:05:02.866 "vmd_remove_device", 00:05:02.866 "vmd_enable", 00:05:02.866 "sock_get_default_impl", 00:05:02.866 "sock_set_default_impl", 00:05:02.866 "sock_impl_set_options", 00:05:02.866 "sock_impl_get_options", 00:05:02.866 "iobuf_get_stats", 00:05:02.866 "iobuf_set_options", 00:05:02.866 "keyring_get_keys", 00:05:02.866 "framework_get_pci_devices", 00:05:02.866 "framework_get_config", 00:05:02.866 "framework_get_subsystems", 
00:05:02.866 "fsdev_set_opts", 00:05:02.866 "fsdev_get_opts", 00:05:02.866 "trace_get_info", 00:05:02.866 "trace_get_tpoint_group_mask", 00:05:02.866 "trace_disable_tpoint_group", 00:05:02.866 "trace_enable_tpoint_group", 00:05:02.866 "trace_clear_tpoint_mask", 00:05:02.866 "trace_set_tpoint_mask", 00:05:02.866 "notify_get_notifications", 00:05:02.866 "notify_get_types", 00:05:02.866 "spdk_get_version", 00:05:02.866 "rpc_get_methods" 00:05:02.866 ] 00:05:02.866 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.866 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.866 18:10:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3271700 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3271700 ']' 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3271700 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3271700 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3271700' 00:05:02.866 killing process with pid 3271700 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3271700 00:05:02.866 18:10:15 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3271700 00:05:03.492 00:05:03.492 real 0m1.783s 00:05:03.492 user 0m3.146s 00:05:03.492 sys 0m0.587s 00:05:03.492 18:10:16 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.492 18:10:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 END TEST spdkcli_tcp 00:05:03.492 ************************************ 00:05:03.492 18:10:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.492 18:10:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.492 18:10:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.492 18:10:16 -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 START TEST dpdk_mem_utility 00:05:03.492 ************************************ 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.492 * Looking for test storage... 
00:05:03.492 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.492 18:10:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.492 --rc genhtml_branch_coverage=1 00:05:03.492 --rc genhtml_function_coverage=1 00:05:03.492 --rc genhtml_legend=1 00:05:03.492 --rc geninfo_all_blocks=1 00:05:03.492 --rc geninfo_unexecuted_blocks=1 00:05:03.492 00:05:03.492 ' 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.492 --rc 
genhtml_branch_coverage=1 00:05:03.492 --rc genhtml_function_coverage=1 00:05:03.492 --rc genhtml_legend=1 00:05:03.492 --rc geninfo_all_blocks=1 00:05:03.492 --rc geninfo_unexecuted_blocks=1 00:05:03.492 00:05:03.492 ' 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.492 --rc genhtml_branch_coverage=1 00:05:03.492 --rc genhtml_function_coverage=1 00:05:03.492 --rc genhtml_legend=1 00:05:03.492 --rc geninfo_all_blocks=1 00:05:03.492 --rc geninfo_unexecuted_blocks=1 00:05:03.492 00:05:03.492 ' 00:05:03.492 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.492 --rc genhtml_branch_coverage=1 00:05:03.492 --rc genhtml_function_coverage=1 00:05:03.492 --rc genhtml_legend=1 00:05:03.492 --rc geninfo_all_blocks=1 00:05:03.492 --rc geninfo_unexecuted_blocks=1 00:05:03.492 00:05:03.492 ' 00:05:03.492 18:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.492 18:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3271970 00:05:03.493 18:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.493 18:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3271970 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3271970 ']' 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.493 18:10:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.752 [2024-10-08 18:10:16.679485] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:03.752 [2024-10-08 18:10:16.679548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271970 ] 00:05:03.752 [2024-10-08 18:10:16.760724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.752 [2024-10-08 18:10:16.842419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.689 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.689 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:04.689 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.689 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.689 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.689 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.689 { 00:05:04.689 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.689 } 00:05:04.689 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.689 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.689 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:04.689 1 heaps totaling size 860.000000 MiB 00:05:04.689 size: 860.000000 MiB heap id: 0 00:05:04.689 end heaps---------- 00:05:04.689 9 mempools totaling size 642.649841 MiB 00:05:04.689 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.689 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.689 size: 92.545471 MiB name: bdev_io_3271970 00:05:04.689 size: 51.011292 MiB name: evtpool_3271970 00:05:04.689 size: 50.003479 MiB name: msgpool_3271970 00:05:04.689 size: 36.509338 MiB name: fsdev_io_3271970 00:05:04.689 size: 21.763794 MiB name: PDU_Pool 00:05:04.689 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.689 size: 0.026123 MiB name: Session_Pool 00:05:04.689 end mempools------- 00:05:04.689 6 memzones totaling size 4.142822 MiB 00:05:04.689 size: 1.000366 MiB name: RG_ring_0_3271970 00:05:04.689 size: 1.000366 MiB name: RG_ring_1_3271970 00:05:04.689 size: 1.000366 MiB name: RG_ring_4_3271970 00:05:04.689 size: 1.000366 MiB name: RG_ring_5_3271970 00:05:04.689 size: 0.125366 MiB name: RG_ring_2_3271970 00:05:04.689 size: 0.015991 MiB name: RG_ring_3_3271970 00:05:04.689 end memzones------- 00:05:04.689 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.689 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:04.689 list of free elements. 
size: 13.984680 MiB 00:05:04.689 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.689 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:04.689 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:04.689 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:04.689 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:04.689 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:04.689 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:04.689 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:04.689 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:04.689 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:04.689 element at address: 0x200003e00000 with size: 0.495422 MiB 00:05:04.689 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:04.689 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:04.689 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:04.689 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:04.689 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:04.689 list of standard malloc elements. size: 199.218628 MiB 00:05:04.689 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:04.689 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:04.689 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:04.689 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:04.689 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:04.689 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.689 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:04.689 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.689 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:04.689 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.689 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003aff940 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:04.689 element at address: 0x200003eff000 with size: 0.000183 MiB 00:05:04.690 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:04.690 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:04.690 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:04.690 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:04.690 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:04.690 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:04.690 list of memzone associated elements. size: 646.796692 MiB 00:05:04.690 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:04.690 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.690 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:04.690 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.690 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:04.690 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3271970_0 00:05:04.690 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.690 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3271970_0 00:05:04.690 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.690 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3271970_0 00:05:04.690 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:04.690 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3271970_0 00:05:04.690 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:04.690 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.690 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:04.690 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.690 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.690 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3271970 00:05:04.690 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.690 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3271970 00:05:04.690 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.690 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3271970 00:05:04.690 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:04.690 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.690 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:04.690 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.690 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:04.690 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.690 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:04.690 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.690 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:04.690 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3271970 00:05:04.690 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.690 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_3271970 00:05:04.690 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:04.690 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3271970 00:05:04.690 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:04.690 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3271970 00:05:04.690 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:05:04.690 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3271970 00:05:04.690 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:04.690 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3271970 00:05:04.690 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:04.690 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.690 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:04.690 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.690 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:04.690 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.690 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:05:04.690 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3271970 00:05:04.690 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:04.690 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.690 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:04.690 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.690 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:05:04.690 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3271970 00:05:04.690 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:04.690 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.690 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:04.690 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3271970 00:05:04.690 element at address: 0x200003affa00 with size: 0.000305 MiB 00:05:04.690 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3271970 00:05:04.690 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:05:04.690 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3271970 00:05:04.690 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:04.690 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.690 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.690 18:10:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3271970 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3271970 ']' 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3271970 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3271970 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3271970' 
00:05:04.690 killing process with pid 3271970 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3271970 00:05:04.690 18:10:17 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3271970 00:05:04.950 00:05:04.950 real 0m1.650s 00:05:04.950 user 0m1.668s 00:05:04.950 sys 0m0.541s 00:05:04.950 18:10:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.950 18:10:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.950 ************************************ 00:05:04.950 END TEST dpdk_mem_utility 00:05:04.950 ************************************ 00:05:04.950 18:10:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:04.950 18:10:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.950 18:10:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.210 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.210 ************************************ 00:05:05.210 START TEST event 00:05:05.210 ************************************ 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:05.210 * Looking for test storage... 00:05:05.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.210 18:10:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.210 18:10:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.210 18:10:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.210 18:10:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.210 18:10:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.210 18:10:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.210 18:10:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.210 18:10:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.210 18:10:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.210 18:10:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.210 18:10:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.210 18:10:18 event -- scripts/common.sh@344 -- # case "$op" in 00:05:05.210 18:10:18 event -- scripts/common.sh@345 -- # : 1 00:05:05.210 18:10:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.210 18:10:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.210 18:10:18 event -- scripts/common.sh@365 -- # decimal 1 00:05:05.210 18:10:18 event -- scripts/common.sh@353 -- # local d=1 00:05:05.210 18:10:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.210 18:10:18 event -- scripts/common.sh@355 -- # echo 1 00:05:05.210 18:10:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.210 18:10:18 event -- scripts/common.sh@366 -- # decimal 2 00:05:05.210 18:10:18 event -- scripts/common.sh@353 -- # local d=2 00:05:05.210 18:10:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.210 18:10:18 event -- scripts/common.sh@355 -- # echo 2 00:05:05.210 18:10:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.210 18:10:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.210 18:10:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.210 18:10:18 event -- scripts/common.sh@368 -- # return 0 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.210 --rc genhtml_branch_coverage=1 00:05:05.210 --rc genhtml_function_coverage=1 00:05:05.210 --rc genhtml_legend=1 00:05:05.210 --rc geninfo_all_blocks=1 00:05:05.210 --rc geninfo_unexecuted_blocks=1 00:05:05.210 00:05:05.210 ' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.210 --rc genhtml_branch_coverage=1 00:05:05.210 --rc genhtml_function_coverage=1 00:05:05.210 --rc genhtml_legend=1 00:05:05.210 --rc geninfo_all_blocks=1 00:05:05.210 --rc geninfo_unexecuted_blocks=1 00:05:05.210 00:05:05.210 ' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.210 --rc genhtml_branch_coverage=1 00:05:05.210 --rc genhtml_function_coverage=1 00:05:05.210 --rc genhtml_legend=1 00:05:05.210 --rc geninfo_all_blocks=1 00:05:05.210 --rc geninfo_unexecuted_blocks=1 00:05:05.210 00:05:05.210 ' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.210 --rc genhtml_branch_coverage=1 00:05:05.210 --rc genhtml_function_coverage=1 00:05:05.210 --rc genhtml_legend=1 00:05:05.210 --rc geninfo_all_blocks=1 00:05:05.210 --rc geninfo_unexecuted_blocks=1 00:05:05.210 00:05:05.210 ' 00:05:05.210 18:10:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:05.210 18:10:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.210 18:10:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:05.210 18:10:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.210 18:10:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.469 ************************************ 00:05:05.469 START TEST event_perf 00:05:05.469 ************************************ 00:05:05.469 18:10:18 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:05:05.469 Running I/O for 1 seconds...[2024-10-08 18:10:18.420213] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:05.469 [2024-10-08 18:10:18.420322] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272366 ] 00:05:05.469 [2024-10-08 18:10:18.506160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.469 [2024-10-08 18:10:18.593971] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.469 [2024-10-08 18:10:18.594114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.469 [2024-10-08 18:10:18.594113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.469 [2024-10-08 18:10:18.594072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.841 Running I/O for 1 seconds... 00:05:06.841 lcore 0: 208731 00:05:06.841 lcore 1: 208731 00:05:06.841 lcore 2: 208732 00:05:06.841 lcore 3: 208731 00:05:06.841 done. 00:05:06.841 00:05:06.841 real 0m1.273s 00:05:06.841 user 0m4.160s 00:05:06.841 sys 0m0.108s 00:05:06.841 18:10:19 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.841 18:10:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 ************************************ 00:05:06.841 END TEST event_perf 00:05:06.841 ************************************ 00:05:06.841 18:10:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.841 18:10:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:06.841 18:10:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.841 18:10:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 ************************************ 00:05:06.841 START TEST event_reactor 00:05:06.841 ************************************ 00:05:06.841 18:10:19 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.841 [2024-10-08 18:10:19.768967] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:06.841 [2024-10-08 18:10:19.769057] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272558 ] 00:05:06.841 [2024-10-08 18:10:19.856317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.841 [2024-10-08 18:10:19.939583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.290 test_start 00:05:08.290 oneshot 00:05:08.290 tick 100 00:05:08.290 tick 100 00:05:08.290 tick 250 00:05:08.290 tick 100 00:05:08.290 tick 100 00:05:08.290 tick 100 00:05:08.290 tick 250 00:05:08.290 tick 500 00:05:08.290 tick 100 00:05:08.290 tick 100 00:05:08.290 tick 250 00:05:08.290 tick 100 00:05:08.290 tick 100 00:05:08.290 test_end 00:05:08.290 00:05:08.290 real 0m1.276s 00:05:08.290 user 0m1.162s 00:05:08.290 sys 0m0.109s 00:05:08.290 18:10:21 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.290 18:10:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.290 ************************************ 00:05:08.290 END TEST event_reactor 00:05:08.290 ************************************ 00:05:08.290 18:10:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.290 18:10:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:08.290 18:10:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.290 18:10:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.290 ************************************ 00:05:08.290 START TEST event_reactor_perf 00:05:08.290 ************************************ 00:05:08.290 18:10:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.290 [2024-10-08 18:10:21.131928] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:08.290 [2024-10-08 18:10:21.132041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272751 ] 00:05:08.290 [2024-10-08 18:10:21.218673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.290 [2024-10-08 18:10:21.302233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.230 test_start 00:05:09.230 test_end 00:05:09.230 Performance: 519014 events per second 00:05:09.230 00:05:09.230 real 0m1.273s 00:05:09.230 user 0m1.168s 00:05:09.230 sys 0m0.099s 00:05:09.230 18:10:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.230 18:10:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.230 ************************************ 00:05:09.230 END TEST event_reactor_perf 00:05:09.230 ************************************ 00:05:09.490 18:10:22 event -- event/event.sh@49 -- # uname -s 00:05:09.490 18:10:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.490 18:10:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.490 18:10:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.490 18:10:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.490 18:10:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.490 ************************************ 00:05:09.490 START TEST event_scheduler 00:05:09.490 ************************************ 00:05:09.490 18:10:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.490 * Looking for test storage... 
00:05:09.490 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:09.490 18:10:22 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:09.490 18:10:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.490 18:10:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:09.490 18:10:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:09.490 18:10:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.749 18:10:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:09.749 18:10:22 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.749 18:10:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.749 --rc genhtml_branch_coverage=1 00:05:09.749 --rc genhtml_function_coverage=1 00:05:09.749 --rc genhtml_legend=1 00:05:09.749 --rc geninfo_all_blocks=1 00:05:09.749 --rc geninfo_unexecuted_blocks=1 00:05:09.749 00:05:09.749 ' 00:05:09.749 18:10:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.749 --rc genhtml_branch_coverage=1 00:05:09.749 --rc genhtml_function_coverage=1 00:05:09.749 --rc genhtml_legend=1 00:05:09.749 --rc geninfo_all_blocks=1 00:05:09.749 --rc geninfo_unexecuted_blocks=1 00:05:09.749 00:05:09.749 ' 00:05:09.749 18:10:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.749 --rc genhtml_branch_coverage=1 00:05:09.749 --rc genhtml_function_coverage=1 00:05:09.749 --rc genhtml_legend=1 00:05:09.749 --rc geninfo_all_blocks=1 00:05:09.749 --rc geninfo_unexecuted_blocks=1 00:05:09.749 00:05:09.749 ' 00:05:09.749 18:10:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.749 --rc genhtml_branch_coverage=1 00:05:09.749 --rc genhtml_function_coverage=1 00:05:09.749 --rc genhtml_legend=1 00:05:09.749 --rc geninfo_all_blocks=1 00:05:09.750 --rc geninfo_unexecuted_blocks=1 00:05:09.750 00:05:09.750 ' 00:05:09.750 18:10:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.750 18:10:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3273037 00:05:09.750 18:10:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.750 18:10:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.750 18:10:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3273037 
00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3273037 ']' 00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.750 18:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.750 [2024-10-08 18:10:22.720533] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:09.750 [2024-10-08 18:10:22.720591] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273037 ] 00:05:09.750 [2024-10-08 18:10:22.804046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.750 [2024-10-08 18:10:22.893946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.750 [2024-10-08 18:10:22.894051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.750 [2024-10-08 18:10:22.894090] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.750 [2024-10-08 18:10:22.894104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:10.684 18:10:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.684 [2024-10-08 18:10:23.584703] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:10.684 [2024-10-08 18:10:23.584726] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:10.684 [2024-10-08 18:10:23.584737] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:10.684 [2024-10-08 18:10:23.584745] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:10.684 [2024-10-08 18:10:23.584752] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.684 18:10:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.684 [2024-10-08 18:10:23.663876] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
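The trace above configures the scheduler before the framework finishes booting: the app was launched with --wait-for-rpc, framework_set_scheduler switches it to the dynamic scheduler (the dpdk_governor notice about SMT siblings is expected with this core mask), and framework_start_init then completes initialization. A minimal sketch of that ordering (assumption: the test's scheduler app is listening on the default /var/tmp/spdk.sock):

    # only startup RPCs are accepted while the app waits in --wait-for-rpc mode
    ./scripts/rpc.py framework_set_scheduler dynamic   # replace the default static scheduler
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # report the active scheduler and its options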
00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.684 18:10:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.684 18:10:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.684 ************************************ 00:05:10.684 START TEST scheduler_create_thread 00:05:10.684 ************************************ 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.684 2 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.684 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 3 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 4 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 5 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 6 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 7 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 8 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.685 18:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.252 9 00:05:11.252 18:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.252 18:10:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.252 18:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.252 18:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.628 10 00:05:12.628 18:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.628 18:10:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.628 18:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.628 18:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.194 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.194 18:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.194 18:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.194 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.194 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.128 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.128 18:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.128 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.128 18:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.694 18:10:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.694 18:10:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.694 18:10:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.694 18:10:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.694 18:10:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.264 18:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.264 00:05:15.264 real 0m4.469s 00:05:15.264 user 0m0.023s 00:05:15.264 sys 0m0.008s 00:05:15.264 18:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.264 18:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.264 ************************************ 00:05:15.264 END TEST scheduler_create_thread 00:05:15.264 ************************************ 00:05:15.264 18:10:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.264 18:10:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3273037 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3273037 ']' 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3273037 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273037 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273037' 00:05:15.264 killing process with pid 3273037 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3273037 00:05:15.264 18:10:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3273037 00:05:15.522 [2024-10-08 18:10:28.451788] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
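For readers following the scheduler_create_thread trace, the whole test is driven by three RPCs that the scheduler test app registers through its scheduler_plugin module. A condensed sketch of the calls, with arguments copied from the log (rpc_cmd is the harness wrapper around rpc.py, and the plugin module has to be importable for --plugin to resolve):

# Sketch of the plugin RPCs exercised above; thread ids are returned by the create call.
plugin="--plugin scheduler_plugin"
# A thread pinned to core 0 (mask 0x1) that reports itself roughly 100% active.
tid=$(rpc_cmd $plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# Drop its reported activity to 50% so the dynamic scheduler can rebalance it.
rpc_cmd $plugin scheduler_thread_set_active "$tid" 50
# Remove the thread again once the test is done with it.
rpc_cmd $plugin scheduler_thread_delete "$tid"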
00:05:15.779 00:05:15.779 real 0m6.235s 00:05:15.779 user 0m14.760s 00:05:15.779 sys 0m0.523s 00:05:15.779 18:10:28 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.779 18:10:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.779 ************************************ 00:05:15.779 END TEST event_scheduler 00:05:15.779 ************************************ 00:05:15.779 18:10:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.779 18:10:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.779 18:10:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.779 18:10:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.779 18:10:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.779 ************************************ 00:05:15.779 START TEST app_repeat 00:05:15.779 ************************************ 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3273831 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3273831' 00:05:15.779 Process app_repeat pid: 3273831 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.779 spdk_app_start Round 0 00:05:15.779 18:10:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273831 /var/tmp/spdk-nbd.sock 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3273831 ']' 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.779 18:10:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.779 [2024-10-08 18:10:28.841223] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:15.779 [2024-10-08 18:10:28.841282] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273831 ] 00:05:15.779 [2024-10-08 18:10:28.910519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.037 [2024-10-08 18:10:29.004701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.037 [2024-10-08 18:10:29.004702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.602 18:10:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.602 18:10:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.602 18:10:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.860 Malloc0 00:05:16.860 18:10:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.117 Malloc1 00:05:17.117 18:10:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.117 18:10:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.375 /dev/nbd0 00:05:17.375 18:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.375 18:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
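The nbd plumbing traced here is small: each device is a 64 MiB malloc bdev with a 4096-byte block size, exported through the kernel nbd driver, and the harness waits for /proc/partitions to list it before touching it. A condensed sketch using the same RPCs and paths as this run (the scratch file path is illustrative):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# 64 MiB malloc bdev with a 4096-byte block size, exported as /dev/nbd0.
"$rpc" -s "$sock" bdev_malloc_create 64 4096        # prints the bdev name, e.g. Malloc0
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
# Wait for the kernel to publish the device, then read one block to confirm it responds.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct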
00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.375 1+0 records in 00:05:17.375 1+0 records out 00:05:17.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214546 s, 19.1 MB/s 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.375 18:10:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.375 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.375 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.375 18:10:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.633 /dev/nbd1 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.633 1+0 records in 00:05:17.633 1+0 records out 00:05:17.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252059 s, 16.3 MB/s 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.633 18:10:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.633 18:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.892 { 00:05:17.892 "nbd_device": "/dev/nbd0", 00:05:17.892 "bdev_name": "Malloc0" 00:05:17.892 }, 00:05:17.892 { 00:05:17.892 "nbd_device": "/dev/nbd1", 00:05:17.892 "bdev_name": "Malloc1" 00:05:17.892 } 00:05:17.892 ]' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.892 { 00:05:17.892 "nbd_device": "/dev/nbd0", 00:05:17.892 "bdev_name": "Malloc0" 00:05:17.892 }, 00:05:17.892 { 00:05:17.892 "nbd_device": "/dev/nbd1", 00:05:17.892 "bdev_name": "Malloc1" 00:05:17.892 } 00:05:17.892 ]' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.892 /dev/nbd1' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.892 /dev/nbd1' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.892 256+0 records in 00:05:17.892 256+0 records out 00:05:17.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112973 s, 92.8 MB/s 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.892 256+0 records in 00:05:17.892 256+0 records out 00:05:17.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195556 s, 53.6 MB/s 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.892 256+0 records in 00:05:17.892 256+0 records out 00:05:17.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204876 s, 51.2 MB/s 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.892 18:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.151 18:10:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.408 18:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.666 18:10:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.666 18:10:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.924 18:10:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.181 [2024-10-08 18:10:32.115963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.181 [2024-10-08 18:10:32.198954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.181 [2024-10-08 18:10:32.198955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.181 [2024-10-08 18:10:32.241510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.181 [2024-10-08 18:10:32.241556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.456 18:10:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.456 18:10:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.456 spdk_app_start Round 1 00:05:22.456 18:10:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273831 /var/tmp/spdk-nbd.sock 00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3273831 ']' 00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
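Stripped of the harness plumbing, the per-round data check that produced the dd and cmp lines above is a single write/read-back pass over both exported devices, roughly:

# Sketch of nbd_dd_data_verify as seen in this trace (write 1 MiB, read it back, compare).
tmp=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # push it through SPDK
    cmp -b -n 1M "$tmp" "$dev"                             # and verify the read-back
done
rm "$tmp"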
00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.456 18:10:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.456 18:10:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.456 18:10:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.456 18:10:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.456 Malloc0 00:05:22.456 18:10:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.456 Malloc1 00:05:22.456 18:10:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.456 18:10:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.713 /dev/nbd0 00:05:22.713 18:10:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.713 18:10:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:22.713 1+0 records in 00:05:22.713 1+0 records out 00:05:22.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222856 s, 18.4 MB/s 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.713 18:10:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.713 18:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.713 18:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.713 18:10:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.971 /dev/nbd1 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.971 1+0 records in 00:05:22.971 1+0 records out 00:05:22.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284048 s, 14.4 MB/s 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.971 18:10:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.971 18:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.229 { 00:05:23.229 
"nbd_device": "/dev/nbd0", 00:05:23.229 "bdev_name": "Malloc0" 00:05:23.229 }, 00:05:23.229 { 00:05:23.229 "nbd_device": "/dev/nbd1", 00:05:23.229 "bdev_name": "Malloc1" 00:05:23.229 } 00:05:23.229 ]' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.229 { 00:05:23.229 "nbd_device": "/dev/nbd0", 00:05:23.229 "bdev_name": "Malloc0" 00:05:23.229 }, 00:05:23.229 { 00:05:23.229 "nbd_device": "/dev/nbd1", 00:05:23.229 "bdev_name": "Malloc1" 00:05:23.229 } 00:05:23.229 ]' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.229 /dev/nbd1' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.229 /dev/nbd1' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.229 256+0 records in 00:05:23.229 256+0 records out 00:05:23.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107552 s, 97.5 MB/s 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.229 18:10:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.487 256+0 records in 00:05:23.487 256+0 records out 00:05:23.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198926 s, 52.7 MB/s 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.487 256+0 records in 00:05:23.487 256+0 records out 00:05:23.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021188 s, 49.5 MB/s 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.487 18:10:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.488 18:10:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.746 18:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.005 18:10:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.005 18:10:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.263 18:10:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.522 [2024-10-08 18:10:37.559065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.522 [2024-10-08 18:10:37.641319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.522 [2024-10-08 18:10:37.641319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.522 [2024-10-08 18:10:37.690078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.522 [2024-10-08 18:10:37.690127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.804 18:10:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.804 18:10:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.804 spdk_app_start Round 2 00:05:27.804 18:10:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273831 /var/tmp/spdk-nbd.sock 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3273831 ']' 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
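The round structure visible in the event.sh fragments is a plain loop: start a round, wait for the RPC socket, run the malloc/nbd setup and the dd/cmp check, then ask the app to stop the current iteration. In outline (bodies elided, names as they appear in the trace):

# Outline of the app_repeat loop reconstructed from the event.sh lines above.
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock            # wait for the RPC socket
    # ...create Malloc0/Malloc1, export them over nbd, run the dd/cmp check...
    "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this iteration
    sleep 3                                 # app_repeat (started with -t 4) begins the next round
done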
00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.804 18:10:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.804 18:10:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.804 Malloc0 00:05:27.804 18:10:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.069 Malloc1 00:05:28.069 18:10:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.069 18:10:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.069 /dev/nbd0 00:05:28.327 18:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.327 18:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:28.327 1+0 records in 00:05:28.327 1+0 records out 00:05:28.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229598 s, 17.8 MB/s 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.327 18:10:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.327 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.327 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.327 18:10:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.327 /dev/nbd1 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.585 1+0 records in 00:05:28.585 1+0 records out 00:05:28.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199741 s, 20.5 MB/s 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.585 18:10:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.585 18:10:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.843 { 00:05:28.843 
"nbd_device": "/dev/nbd0", 00:05:28.843 "bdev_name": "Malloc0" 00:05:28.843 }, 00:05:28.843 { 00:05:28.843 "nbd_device": "/dev/nbd1", 00:05:28.843 "bdev_name": "Malloc1" 00:05:28.843 } 00:05:28.843 ]' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.843 { 00:05:28.843 "nbd_device": "/dev/nbd0", 00:05:28.843 "bdev_name": "Malloc0" 00:05:28.843 }, 00:05:28.843 { 00:05:28.843 "nbd_device": "/dev/nbd1", 00:05:28.843 "bdev_name": "Malloc1" 00:05:28.843 } 00:05:28.843 ]' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.843 /dev/nbd1' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.843 /dev/nbd1' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.843 18:10:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.843 256+0 records in 00:05:28.843 256+0 records out 00:05:28.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103981 s, 101 MB/s 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.844 256+0 records in 00:05:28.844 256+0 records out 00:05:28.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194448 s, 53.9 MB/s 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.844 256+0 records in 00:05:28.844 256+0 records out 00:05:28.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205902 s, 50.9 MB/s 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.844 18:10:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.102 18:10:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.360 18:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.619 18:10:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.619 18:10:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.877 18:10:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.877 [2024-10-08 18:10:43.017276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.136 [2024-10-08 18:10:43.101131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.136 [2024-10-08 18:10:43.101132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.136 [2024-10-08 18:10:43.148970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.136 [2024-10-08 18:10:43.149023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.674 18:10:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3273831 /var/tmp/spdk-nbd.sock 00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3273831 ']' 00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
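The xtrace above is bdev/nbd_common.sh writing 1 MiB of random data through each exported NBD device, comparing it back, and then detaching the disks over the RPC socket. A minimal standalone sketch of that write/verify pattern follows; the rpc.py path and the /var/tmp/spdk-nbd.sock socket are assumptions for illustration, not a statement of the only valid setup:

    #!/usr/bin/env bash
    # Sketch only: push random data through each NBD device, read it back, detach.
    set -e
    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"        # assumed rpc.py path and socket
    tmp=$(mktemp)
    nbd_list=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
    dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
    for dev in $nbd_list; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # write through the NBD device
        cmp -b -n 1M "$tmp" "$dev"                            # verify it reads back byte for byte
    done
    rm "$tmp"
    for dev in $nbd_list; do
        $rpc nbd_stop_disk "$dev"                             # detach; the test then polls /proc/partitions
    done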
00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.674 18:10:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.933 18:10:46 event.app_repeat -- event/event.sh@39 -- # killprocess 3273831 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3273831 ']' 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3273831 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.933 18:10:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273831 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273831' 00:05:33.193 killing process with pid 3273831 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3273831 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3273831 00:05:33.193 spdk_app_start is called in Round 0. 00:05:33.193 Shutdown signal received, stop current app iteration 00:05:33.193 Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 reinitialization... 00:05:33.193 spdk_app_start is called in Round 1. 00:05:33.193 Shutdown signal received, stop current app iteration 00:05:33.193 Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 reinitialization... 00:05:33.193 spdk_app_start is called in Round 2. 00:05:33.193 Shutdown signal received, stop current app iteration 00:05:33.193 Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 reinitialization... 00:05:33.193 spdk_app_start is called in Round 3. 
00:05:33.193 Shutdown signal received, stop current app iteration 00:05:33.193 18:10:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.193 18:10:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.193 00:05:33.193 real 0m17.486s 00:05:33.193 user 0m37.606s 00:05:33.193 sys 0m3.258s 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.193 18:10:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.193 ************************************ 00:05:33.193 END TEST app_repeat 00:05:33.193 ************************************ 00:05:33.193 18:10:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.193 18:10:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.193 18:10:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.193 18:10:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.193 18:10:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.453 ************************************ 00:05:33.453 START TEST cpu_locks 00:05:33.454 ************************************ 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.454 * Looking for test storage... 00:05:33.454 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.454 18:10:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.454 --rc genhtml_branch_coverage=1 00:05:33.454 --rc genhtml_function_coverage=1 00:05:33.454 --rc genhtml_legend=1 00:05:33.454 --rc geninfo_all_blocks=1 00:05:33.454 --rc geninfo_unexecuted_blocks=1 00:05:33.454 00:05:33.454 ' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.454 --rc genhtml_branch_coverage=1 00:05:33.454 --rc genhtml_function_coverage=1 00:05:33.454 --rc genhtml_legend=1 00:05:33.454 --rc geninfo_all_blocks=1 00:05:33.454 --rc geninfo_unexecuted_blocks=1 00:05:33.454 00:05:33.454 ' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.454 --rc genhtml_branch_coverage=1 00:05:33.454 --rc genhtml_function_coverage=1 00:05:33.454 --rc genhtml_legend=1 00:05:33.454 --rc geninfo_all_blocks=1 00:05:33.454 --rc geninfo_unexecuted_blocks=1 00:05:33.454 00:05:33.454 ' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.454 --rc genhtml_branch_coverage=1 00:05:33.454 --rc genhtml_function_coverage=1 00:05:33.454 --rc genhtml_legend=1 00:05:33.454 --rc geninfo_all_blocks=1 00:05:33.454 --rc geninfo_unexecuted_blocks=1 00:05:33.454 00:05:33.454 ' 00:05:33.454 18:10:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.454 18:10:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.454 18:10:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.454 18:10:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.454 18:10:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.454 ************************************ 
00:05:33.454 START TEST default_locks 00:05:33.454 ************************************ 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3276431 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3276431 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3276431 ']' 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.454 18:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.714 [2024-10-08 18:10:46.651557] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:33.714 [2024-10-08 18:10:46.651612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276431 ] 00:05:33.714 [2024-10-08 18:10:46.734222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.714 [2024-10-08 18:10:46.816291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.652 18:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.652 18:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:34.652 18:10:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3276431 00:05:34.652 18:10:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3276431 00:05:34.652 18:10:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.221 lslocks: write error 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3276431 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3276431 ']' 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3276431 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3276431 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3276431' 00:05:35.221 killing process with pid 3276431 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3276431 00:05:35.221 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3276431 00:05:35.480 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3276431 00:05:35.480 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:35.480 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3276431 00:05:35.480 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3276431 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3276431 ']' 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
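The default_locks trace above starts spdk_tgt on core mask 0x1 and confirms it holds a per-core file lock via lslocks; the stray "lslocks: write error" almost certainly comes from lslocks writing into a pipe that grep -q has already closed after its first match, not from a test failure. A minimal sketch of the same check, with the pid lookup as a placeholder:

    # Sketch only: confirm a running spdk_tgt holds an spdk_cpu_lock file lock.
    pid=$(pgrep -f spdk_tgt | head -n1)                 # placeholder way to find the target pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a CPU core lock"
    else
        echo "no CPU core lock held by pid $pid"
    fi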
00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.740 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3276431) - No such process 00:05:35.740 ERROR: process (pid: 3276431) is no longer running 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.740 00:05:35.740 real 0m2.062s 00:05:35.740 user 0m2.152s 00:05:35.740 sys 0m0.813s 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.740 18:10:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.740 ************************************ 00:05:35.740 END TEST default_locks 00:05:35.740 ************************************ 00:05:35.740 18:10:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:35.740 18:10:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.740 18:10:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.740 18:10:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.740 ************************************ 00:05:35.740 START TEST default_locks_via_rpc 00:05:35.740 ************************************ 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3276804 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3276804 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3276804 ']' 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
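Before default_locks_via_rpc starts above, the default_locks run finishes with two assertions: waiting on the killed pid must fail (the NOT wrapper expects a non-zero return, hence the "No such process" line), and no stale lock files may remain. A sketch of the second assertion, assuming the /var/tmp/spdk_cpu_lock_* naming seen in the trace:

    # Sketch only: fail if any per-core lock files survive the target.
    shopt -s nullglob
    locks=(/var/tmp/spdk_cpu_lock_*)
    if (( ${#locks[@]} != 0 )); then
        echo "stale CPU core locks: ${locks[*]}" >&2
        exit 1
    fi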
00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.740 18:10:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.740 [2024-10-08 18:10:48.800915] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:35.740 [2024-10-08 18:10:48.800974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276804 ] 00:05:35.740 [2024-10-08 18:10:48.882393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.999 [2024-10-08 18:10:48.972348] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3276804 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3276804 00:05:36.568 18:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.138 18:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3276804 00:05:37.138 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3276804 ']' 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3276804 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3276804 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.139 
18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3276804' 00:05:37.139 killing process with pid 3276804 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3276804 00:05:37.139 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3276804 00:05:37.707 00:05:37.707 real 0m1.858s 00:05:37.707 user 0m1.932s 00:05:37.707 sys 0m0.664s 00:05:37.707 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.707 18:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.707 ************************************ 00:05:37.707 END TEST default_locks_via_rpc 00:05:37.707 ************************************ 00:05:37.707 18:10:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:37.707 18:10:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.707 18:10:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.707 18:10:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.707 ************************************ 00:05:37.707 START TEST non_locking_app_on_locked_coremask 00:05:37.708 ************************************ 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3277040 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3277040 /var/tmp/spdk.sock 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3277040 ']' 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.708 18:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.708 [2024-10-08 18:10:50.744536] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:37.708 [2024-10-08 18:10:50.744588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277040 ] 00:05:37.708 [2024-10-08 18:10:50.828623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.967 [2024-10-08 18:10:50.919041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3277218 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3277218 /var/tmp/spdk2.sock 00:05:38.536 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3277218 ']' 00:05:38.537 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.537 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.537 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.537 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.537 18:10:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.537 [2024-10-08 18:10:51.620512] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:38.537 [2024-10-08 18:10:51.620567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277218 ] 00:05:38.796 [2024-10-08 18:10:51.716156] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
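The non_locking_app_on_locked_coremask trace above launches a second spdk_tgt on the same core mask as the first; it comes up (note the "CPU core locks deactivated." notice) only because it passes --disable-cpumask-locks and listens on its own RPC socket. A rough sketch of that launch pattern, with the binary path shortened for illustration:

    # Sketch only: two spdk_tgt instances sharing core mask 0x1.
    ./build/bin/spdk_tgt -m 0x1 &                                                  # first instance claims its per-core lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second skips the lock, separate socket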
00:05:38.796 [2024-10-08 18:10:51.716181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.796 [2024-10-08 18:10:51.881327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.365 18:10:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.365 18:10:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.365 18:10:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3277040 00:05:39.365 18:10:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3277040 00:05:39.365 18:10:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.332 lslocks: write error 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3277040 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3277040 ']' 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3277040 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.332 18:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277040 00:05:41.332 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.332 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.332 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277040' 00:05:41.332 killing process with pid 3277040 00:05:41.332 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3277040 00:05:41.332 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3277040 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3277218 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3277218 ']' 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3277218 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.597 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277218 00:05:41.856 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.856 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.856 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277218' 00:05:41.856 
killing process with pid 3277218 00:05:41.856 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3277218 00:05:41.856 18:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3277218 00:05:42.116 00:05:42.116 real 0m4.505s 00:05:42.116 user 0m4.752s 00:05:42.116 sys 0m1.545s 00:05:42.116 18:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.116 18:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.116 ************************************ 00:05:42.116 END TEST non_locking_app_on_locked_coremask 00:05:42.116 ************************************ 00:05:42.116 18:10:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.116 18:10:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.116 18:10:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.116 18:10:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.116 ************************************ 00:05:42.116 START TEST locking_app_on_unlocked_coremask 00:05:42.116 ************************************ 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3277635 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3277635 /var/tmp/spdk.sock 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3277635 ']' 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.116 18:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.375 [2024-10-08 18:10:55.330410] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:42.375 [2024-10-08 18:10:55.330463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277635 ] 00:05:42.375 [2024-10-08 18:10:55.416020] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.375 [2024-10-08 18:10:55.416051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.375 [2024-10-08 18:10:55.507184] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3277813 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3277813 /var/tmp/spdk2.sock 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3277813 ']' 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.314 18:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.314 [2024-10-08 18:10:56.201388] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:43.314 [2024-10-08 18:10:56.201445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277813 ] 00:05:43.314 [2024-10-08 18:10:56.296201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.314 [2024-10-08 18:10:56.457807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.252 18:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.252 18:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.252 18:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3277813 00:05:44.252 18:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3277813 00:05:44.252 18:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.631 lslocks: write error 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3277635 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3277635 ']' 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3277635 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277635 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277635' 00:05:45.631 killing process with pid 3277635 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3277635 00:05:45.631 18:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3277635 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3277813 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3277813 ']' 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3277813 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277813 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.200 18:10:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277813' 00:05:46.200 killing process with pid 3277813 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3277813 00:05:46.200 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3277813 00:05:46.768 00:05:46.768 real 0m4.355s 00:05:46.768 user 0m4.625s 00:05:46.768 sys 0m1.476s 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.768 ************************************ 00:05:46.768 END TEST locking_app_on_unlocked_coremask 00:05:46.768 ************************************ 00:05:46.768 18:10:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.768 18:10:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.768 18:10:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.768 18:10:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.768 ************************************ 00:05:46.768 START TEST locking_app_on_locked_coremask 00:05:46.768 ************************************ 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3278330 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3278330 /var/tmp/spdk.sock 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3278330 ']' 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.768 18:10:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.768 [2024-10-08 18:10:59.775411] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:46.768 [2024-10-08 18:10:59.775467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278330 ] 00:05:46.768 [2024-10-08 18:10:59.862831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.028 [2024-10-08 18:10:59.951050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.600 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.600 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3278404 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3278404 /var/tmp/spdk2.sock 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3278404 /var/tmp/spdk2.sock 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3278404 /var/tmp/spdk2.sock 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3278404 ']' 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.601 18:11:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.601 [2024-10-08 18:11:00.667286] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:47.601 [2024-10-08 18:11:00.667343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278404 ] 00:05:47.601 [2024-10-08 18:11:00.763549] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3278330 has claimed it. 00:05:47.601 [2024-10-08 18:11:00.763593] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.169 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3278404) - No such process 00:05:48.169 ERROR: process (pid: 3278404) is no longer running 00:05:48.169 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3278330 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3278330 00:05:48.170 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.738 lslocks: write error 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3278330 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3278330 ']' 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3278330 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278330 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278330' 00:05:48.738 killing process with pid 3278330 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3278330 00:05:48.738 18:11:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3278330 00:05:48.998 00:05:48.998 real 0m2.325s 00:05:48.998 user 0m2.554s 00:05:48.998 sys 0m0.710s 00:05:48.998 18:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:48.998 18:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.998 ************************************ 00:05:48.998 END TEST locking_app_on_locked_coremask 00:05:48.998 ************************************ 00:05:48.998 18:11:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:48.998 18:11:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.998 18:11:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.998 18:11:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.998 ************************************ 00:05:48.998 START TEST locking_overlapped_coremask 00:05:48.998 ************************************ 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3278617 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3278617 /var/tmp/spdk.sock 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3278617 ']' 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.998 18:11:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.257 [2024-10-08 18:11:02.185270] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:49.257 [2024-10-08 18:11:02.185328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278617 ] 00:05:49.257 [2024-10-08 18:11:02.260436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.257 [2024-10-08 18:11:02.366747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.257 [2024-10-08 18:11:02.366844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.257 [2024-10-08 18:11:02.366845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3278803 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3278803 /var/tmp/spdk2.sock 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3278803 /var/tmp/spdk2.sock 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3278803 /var/tmp/spdk2.sock 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3278803 ']' 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.196 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.196 [2024-10-08 18:11:03.081777] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:05:50.196 [2024-10-08 18:11:03.081835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278803 ] 00:05:50.196 [2024-10-08 18:11:03.182457] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3278617 has claimed it. 00:05:50.196 [2024-10-08 18:11:03.182500] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.766 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3278803) - No such process 00:05:50.766 ERROR: process (pid: 3278803) is no longer running 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3278617 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3278617 ']' 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3278617 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278617 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278617' 00:05:50.766 killing process with pid 3278617 00:05:50.766 18:11:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3278617 00:05:50.766 18:11:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3278617 00:05:51.025 00:05:51.025 real 0m2.043s 00:05:51.025 user 0m5.636s 00:05:51.025 sys 0m0.502s 00:05:51.025 18:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.025 18:11:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.025 ************************************ 00:05:51.025 END TEST locking_overlapped_coremask 00:05:51.025 ************************************ 00:05:51.285 18:11:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.285 18:11:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.285 18:11:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.285 18:11:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 ************************************ 00:05:51.285 START TEST locking_overlapped_coremask_via_rpc 00:05:51.285 ************************************ 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3279011 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3279011 /var/tmp/spdk.sock 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3279011 ']' 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.285 18:11:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 [2024-10-08 18:11:04.308324] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:51.285 [2024-10-08 18:11:04.308382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279011 ] 00:05:51.285 [2024-10-08 18:11:04.393616] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
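The check_remaining_locks step at the end of the previous test compares the lock files left in /var/tmp against the set expected for the 3-core mask. A rough reconstruction from the expansions printed in the xtrace above; the real helper lives in test/event/cpu_locks.sh and may differ in detail:

    # sketch based on the trace, not the authoritative implementation
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }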
00:05:51.285 [2024-10-08 18:11:04.393651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.544 [2024-10-08 18:11:04.482936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.544 [2024-10-08 18:11:04.483083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.544 [2024-10-08 18:11:04.483083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3279098 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3279098 /var/tmp/spdk2.sock 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3279098 ']' 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.113 18:11:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.113 [2024-10-08 18:11:05.209760] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:52.113 [2024-10-08 18:11:05.209822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279098 ] 00:05:52.373 [2024-10-08 18:11:05.312764] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.373 [2024-10-08 18:11:05.312800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.373 [2024-10-08 18:11:05.475789] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.373 [2024-10-08 18:11:05.479049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.373 [2024-10-08 18:11:05.479050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.942 [2024-10-08 18:11:06.081070] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3279011 has claimed it. 
00:05:52.942 request: 00:05:52.942 { 00:05:52.942 "method": "framework_enable_cpumask_locks", 00:05:52.942 "req_id": 1 00:05:52.942 } 00:05:52.942 Got JSON-RPC error response 00:05:52.942 response: 00:05:52.942 { 00:05:52.942 "code": -32603, 00:05:52.942 "message": "Failed to claim CPU core: 2" 00:05:52.942 } 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3279011 /var/tmp/spdk.sock 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3279011 ']' 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.942 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3279098 /var/tmp/spdk2.sock 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3279098 ']' 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
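The JSON-RPC error above is the expected outcome of asking the second, lock-disabled target to claim its cores while the first target still holds core 2. A hedged sketch of the same call issued by hand, with the socket path and script location taken from the trace:

    # illustrative invocation; the test drives this through the rpc_cmd wrapper
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail with -32603 "Failed to claim CPU core: 2" while the first
    # target still holds /var/tmp/spdk_cpu_lock_002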
00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.201 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.461 00:05:53.461 real 0m2.252s 00:05:53.461 user 0m0.969s 00:05:53.461 sys 0m0.216s 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.461 18:11:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.461 ************************************ 00:05:53.461 END TEST locking_overlapped_coremask_via_rpc 00:05:53.461 ************************************ 00:05:53.461 18:11:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.461 18:11:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3279011 ]] 00:05:53.461 18:11:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3279011 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3279011 ']' 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3279011 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279011 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279011' 00:05:53.461 killing process with pid 3279011 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3279011 00:05:53.461 18:11:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3279011 00:05:54.030 18:11:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3279098 ]] 00:05:54.030 18:11:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3279098 00:05:54.030 18:11:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3279098 ']' 00:05:54.030 18:11:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3279098 00:05:54.030 18:11:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:54.030 18:11:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:54.030 18:11:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279098 00:05:54.030 18:11:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:54.030 18:11:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:54.030 18:11:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279098' 00:05:54.030 killing process with pid 3279098 00:05:54.030 18:11:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3279098 00:05:54.030 18:11:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3279098 00:05:54.289 18:11:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.289 18:11:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.289 18:11:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3279011 ]] 00:05:54.289 18:11:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3279011 00:05:54.289 18:11:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3279011 ']' 00:05:54.289 18:11:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3279011 00:05:54.290 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3279011) - No such process 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3279011 is not found' 00:05:54.290 Process with pid 3279011 is not found 00:05:54.290 18:11:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3279098 ]] 00:05:54.290 18:11:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3279098 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3279098 ']' 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3279098 00:05:54.290 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3279098) - No such process 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3279098 is not found' 00:05:54.290 Process with pid 3279098 is not found 00:05:54.290 18:11:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.290 00:05:54.290 real 0m21.064s 00:05:54.290 user 0m34.227s 00:05:54.290 sys 0m7.101s 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.290 18:11:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.290 ************************************ 00:05:54.290 END TEST cpu_locks 00:05:54.290 ************************************ 00:05:54.550 00:05:54.550 real 0m49.322s 00:05:54.550 user 1m33.360s 00:05:54.550 sys 0m11.693s 00:05:54.550 18:11:07 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.550 18:11:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.550 ************************************ 00:05:54.550 END TEST event 00:05:54.550 ************************************ 00:05:54.550 18:11:07 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:54.550 18:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.550 18:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.550 18:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:54.550 ************************************ 00:05:54.550 START TEST thread 00:05:54.550 ************************************ 00:05:54.550 18:11:07 thread -- common/autotest_common.sh@1125 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:54.550 * Looking for test storage... 00:05:54.550 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:54.550 18:11:07 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.550 18:11:07 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.550 18:11:07 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.809 18:11:07 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.809 18:11:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.809 18:11:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.810 18:11:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.810 18:11:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.810 18:11:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.810 18:11:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.810 18:11:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.810 18:11:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.810 18:11:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.810 18:11:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.810 18:11:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.810 18:11:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.810 18:11:07 thread -- scripts/common.sh@345 -- # : 1 00:05:54.810 18:11:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.810 18:11:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.810 18:11:07 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.810 18:11:07 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.810 18:11:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.810 18:11:07 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.810 18:11:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.810 18:11:07 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.810 18:11:07 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.810 18:11:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.810 18:11:07 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.810 18:11:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.810 18:11:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.810 18:11:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.810 18:11:07 thread -- scripts/common.sh@368 -- # return 0 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.810 --rc genhtml_branch_coverage=1 00:05:54.810 --rc genhtml_function_coverage=1 00:05:54.810 --rc genhtml_legend=1 00:05:54.810 --rc geninfo_all_blocks=1 00:05:54.810 --rc geninfo_unexecuted_blocks=1 00:05:54.810 00:05:54.810 ' 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.810 --rc genhtml_branch_coverage=1 00:05:54.810 --rc genhtml_function_coverage=1 00:05:54.810 --rc genhtml_legend=1 00:05:54.810 --rc geninfo_all_blocks=1 00:05:54.810 --rc geninfo_unexecuted_blocks=1 00:05:54.810 00:05:54.810 ' 00:05:54.810 18:11:07 thread -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.810 --rc genhtml_branch_coverage=1 00:05:54.810 --rc genhtml_function_coverage=1 00:05:54.810 --rc genhtml_legend=1 00:05:54.810 --rc geninfo_all_blocks=1 00:05:54.810 --rc geninfo_unexecuted_blocks=1 00:05:54.810 00:05:54.810 ' 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.810 --rc genhtml_branch_coverage=1 00:05:54.810 --rc genhtml_function_coverage=1 00:05:54.810 --rc genhtml_legend=1 00:05:54.810 --rc geninfo_all_blocks=1 00:05:54.810 --rc geninfo_unexecuted_blocks=1 00:05:54.810 00:05:54.810 ' 00:05:54.810 18:11:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.810 18:11:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.810 ************************************ 00:05:54.810 START TEST thread_poller_perf 00:05:54.810 ************************************ 00:05:54.810 18:11:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.810 [2024-10-08 18:11:07.823651] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:54.810 [2024-10-08 18:11:07.823730] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279512 ] 00:05:54.810 [2024-10-08 18:11:07.914455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.069 [2024-10-08 18:11:08.006930] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.069 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:56.007 [2024-10-08T16:11:09.180Z] ====================================== 00:05:56.007 [2024-10-08T16:11:09.180Z] busy:2310326040 (cyc) 00:05:56.007 [2024-10-08T16:11:09.180Z] total_run_count: 412000 00:05:56.007 [2024-10-08T16:11:09.180Z] tsc_hz: 2300000000 (cyc) 00:05:56.007 [2024-10-08T16:11:09.180Z] ====================================== 00:05:56.007 [2024-10-08T16:11:09.180Z] poller_cost: 5607 (cyc), 2437 (nsec) 00:05:56.007 00:05:56.007 real 0m1.295s 00:05:56.007 user 0m1.181s 00:05:56.007 sys 0m0.109s 00:05:56.007 18:11:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.007 18:11:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 END TEST thread_poller_perf 00:05:56.007 ************************************ 00:05:56.007 18:11:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.007 18:11:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:56.007 18:11:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.007 18:11:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 START TEST thread_poller_perf 00:05:56.007 ************************************ 00:05:56.007 18:11:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.266 [2024-10-08 18:11:09.202501] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:05:56.266 [2024-10-08 18:11:09.202568] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279720 ] 00:05:56.266 [2024-10-08 18:11:09.289611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.266 [2024-10-08 18:11:09.374632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.266 Running 1000 pollers for 1 seconds with 0 microseconds period. 
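The poller_cost reported for the 1 microsecond period run above is simply busy cycles divided by run count, converted to nanoseconds with the printed tsc_hz. Reproducing the arithmetic with the numbers from the table:

    busy_cyc=2310326040
    runs=412000
    tsc_hz=2300000000
    echo $(( busy_cyc / runs ))                          # 5607 cycles per poller invocation
    echo $(( busy_cyc / runs * 1000000000 / tsc_hz ))    # 2437 ns at 2.3 GHz

The same formula applied to the 0 microsecond period run that follows yields the 421 cycle / 183 ns figure reported there.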
00:05:57.646 [2024-10-08T16:11:10.819Z] ====================================== 00:05:57.646 [2024-10-08T16:11:10.819Z] busy:2301778108 (cyc) 00:05:57.646 [2024-10-08T16:11:10.819Z] total_run_count: 5459000 00:05:57.646 [2024-10-08T16:11:10.819Z] tsc_hz: 2300000000 (cyc) 00:05:57.646 [2024-10-08T16:11:10.819Z] ====================================== 00:05:57.646 [2024-10-08T16:11:10.819Z] poller_cost: 421 (cyc), 183 (nsec) 00:05:57.646 00:05:57.646 real 0m1.278s 00:05:57.646 user 0m1.168s 00:05:57.646 sys 0m0.105s 00:05:57.646 18:11:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.646 18:11:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 END TEST thread_poller_perf 00:05:57.646 ************************************ 00:05:57.646 18:11:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:57.646 00:05:57.646 real 0m2.942s 00:05:57.646 user 0m2.524s 00:05:57.646 sys 0m0.439s 00:05:57.646 18:11:10 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.646 18:11:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 END TEST thread 00:05:57.646 ************************************ 00:05:57.646 18:11:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:57.646 18:11:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:57.646 18:11:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.646 18:11:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.646 18:11:10 -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 START TEST app_cmdline 00:05:57.646 ************************************ 00:05:57.646 18:11:10 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:57.646 * Looking for test storage... 
00:05:57.646 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:57.646 18:11:10 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.646 18:11:10 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.646 18:11:10 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.646 18:11:10 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.646 18:11:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.647 --rc genhtml_branch_coverage=1 00:05:57.647 --rc genhtml_function_coverage=1 00:05:57.647 --rc genhtml_legend=1 00:05:57.647 --rc geninfo_all_blocks=1 00:05:57.647 --rc geninfo_unexecuted_blocks=1 00:05:57.647 00:05:57.647 ' 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.647 --rc genhtml_branch_coverage=1 00:05:57.647 --rc genhtml_function_coverage=1 00:05:57.647 --rc genhtml_legend=1 00:05:57.647 --rc geninfo_all_blocks=1 00:05:57.647 --rc geninfo_unexecuted_blocks=1 
00:05:57.647 00:05:57.647 ' 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.647 --rc genhtml_branch_coverage=1 00:05:57.647 --rc genhtml_function_coverage=1 00:05:57.647 --rc genhtml_legend=1 00:05:57.647 --rc geninfo_all_blocks=1 00:05:57.647 --rc geninfo_unexecuted_blocks=1 00:05:57.647 00:05:57.647 ' 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.647 --rc genhtml_branch_coverage=1 00:05:57.647 --rc genhtml_function_coverage=1 00:05:57.647 --rc genhtml_legend=1 00:05:57.647 --rc geninfo_all_blocks=1 00:05:57.647 --rc geninfo_unexecuted_blocks=1 00:05:57.647 00:05:57.647 ' 00:05:57.647 18:11:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:57.647 18:11:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3280075 00:05:57.647 18:11:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:57.647 18:11:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3280075 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3280075 ']' 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.647 18:11:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.906 [2024-10-08 18:11:10.846475] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
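This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the default socket. The version query the test performs next can also be issued by hand; a sketch using the script path shown in the trace:

    # illustrative; cmdline.sh wraps the same call
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
    # returns the JSON shown below, e.g. "version": "SPDK v25.01-pre git sha1 8ce2f3c7d"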
00:05:57.906 [2024-10-08 18:11:10.846538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280075 ] 00:05:57.906 [2024-10-08 18:11:10.932164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.906 [2024-10-08 18:11:11.023215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.845 18:11:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.845 18:11:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:58.845 18:11:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:58.845 { 00:05:58.845 "version": "SPDK v25.01-pre git sha1 8ce2f3c7d", 00:05:58.845 "fields": { 00:05:58.845 "major": 25, 00:05:58.845 "minor": 1, 00:05:58.845 "patch": 0, 00:05:58.845 "suffix": "-pre", 00:05:58.845 "commit": "8ce2f3c7d" 00:05:58.845 } 00:05:58.845 } 00:05:58.845 18:11:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:58.845 18:11:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:58.845 18:11:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:58.845 18:11:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:58.846 18:11:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:58.846 18:11:11 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.105 request: 00:05:59.105 { 00:05:59.105 "method": "env_dpdk_get_mem_stats", 00:05:59.105 "req_id": 1 00:05:59.105 } 00:05:59.105 Got JSON-RPC error response 00:05:59.105 response: 00:05:59.105 { 00:05:59.105 "code": -32601, 00:05:59.105 "message": "Method not found" 00:05:59.105 } 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.105 18:11:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3280075 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3280075 ']' 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3280075 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3280075 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.105 18:11:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.106 18:11:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3280075' 00:05:59.106 killing process with pid 3280075 00:05:59.106 18:11:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 3280075 00:05:59.106 18:11:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 3280075 00:05:59.675 00:05:59.675 real 0m1.959s 00:05:59.675 user 0m2.226s 00:05:59.675 sys 0m0.611s 00:05:59.675 18:11:12 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.675 18:11:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.675 ************************************ 00:05:59.675 END TEST app_cmdline 00:05:59.675 ************************************ 00:05:59.675 18:11:12 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:59.675 18:11:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.675 18:11:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.675 18:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.675 ************************************ 00:05:59.675 START TEST version 00:05:59.675 ************************************ 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:59.675 * Looking for test storage... 
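The "Method not found" (-32601) response in the cmdline test above is the RPC allow-list at work: env_dpdk_get_mem_stats is not in --rpcs-allowed, so the request is rejected as if the method did not exist. A sketch of the two outcomes against the same target:

    # illustrative; both calls go to the default /var/tmp/spdk.sock
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods
    # allowed: lists only rpc_get_methods and spdk_get_version
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # rejected: JSON-RPC error -32601 "Method not found"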
00:05:59.675 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.675 18:11:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.675 18:11:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.675 18:11:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.675 18:11:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.675 18:11:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.675 18:11:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.675 18:11:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.675 18:11:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.675 18:11:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.675 18:11:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.675 18:11:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.675 18:11:12 version -- scripts/common.sh@344 -- # case "$op" in 00:05:59.675 18:11:12 version -- scripts/common.sh@345 -- # : 1 00:05:59.675 18:11:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.675 18:11:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.675 18:11:12 version -- scripts/common.sh@365 -- # decimal 1 00:05:59.675 18:11:12 version -- scripts/common.sh@353 -- # local d=1 00:05:59.675 18:11:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.675 18:11:12 version -- scripts/common.sh@355 -- # echo 1 00:05:59.675 18:11:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.675 18:11:12 version -- scripts/common.sh@366 -- # decimal 2 00:05:59.675 18:11:12 version -- scripts/common.sh@353 -- # local d=2 00:05:59.675 18:11:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.675 18:11:12 version -- scripts/common.sh@355 -- # echo 2 00:05:59.675 18:11:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.675 18:11:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.675 18:11:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.675 18:11:12 version -- scripts/common.sh@368 -- # return 0 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.675 --rc genhtml_branch_coverage=1 00:05:59.675 --rc genhtml_function_coverage=1 00:05:59.675 --rc genhtml_legend=1 00:05:59.675 --rc geninfo_all_blocks=1 00:05:59.675 --rc geninfo_unexecuted_blocks=1 00:05:59.675 00:05:59.675 ' 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.675 --rc genhtml_branch_coverage=1 00:05:59.675 --rc genhtml_function_coverage=1 00:05:59.675 --rc genhtml_legend=1 00:05:59.675 --rc geninfo_all_blocks=1 00:05:59.675 --rc geninfo_unexecuted_blocks=1 00:05:59.675 00:05:59.675 ' 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.675 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.675 --rc genhtml_branch_coverage=1 00:05:59.675 --rc genhtml_function_coverage=1 00:05:59.675 --rc genhtml_legend=1 00:05:59.675 --rc geninfo_all_blocks=1 00:05:59.675 --rc geninfo_unexecuted_blocks=1 00:05:59.675 00:05:59.675 ' 00:05:59.675 18:11:12 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.675 --rc genhtml_branch_coverage=1 00:05:59.675 --rc genhtml_function_coverage=1 00:05:59.675 --rc genhtml_legend=1 00:05:59.675 --rc geninfo_all_blocks=1 00:05:59.675 --rc geninfo_unexecuted_blocks=1 00:05:59.675 00:05:59.675 ' 00:05:59.675 18:11:12 version -- app/version.sh@17 -- # get_header_version major 00:05:59.675 18:11:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:59.675 18:11:12 version -- app/version.sh@14 -- # cut -f2 00:05:59.675 18:11:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.675 18:11:12 version -- app/version.sh@17 -- # major=25 00:05:59.675 18:11:12 version -- app/version.sh@18 -- # get_header_version minor 00:05:59.675 18:11:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:59.676 18:11:12 version -- app/version.sh@14 -- # cut -f2 00:05:59.676 18:11:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.676 18:11:12 version -- app/version.sh@18 -- # minor=1 00:05:59.676 18:11:12 version -- app/version.sh@19 -- # get_header_version patch 00:05:59.935 18:11:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:59.935 18:11:12 version -- app/version.sh@14 -- # cut -f2 00:05:59.935 18:11:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.935 18:11:12 version -- app/version.sh@19 -- # patch=0 00:05:59.935 18:11:12 version -- app/version.sh@20 -- # get_header_version suffix 00:05:59.935 18:11:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:59.935 18:11:12 version -- app/version.sh@14 -- # cut -f2 00:05:59.935 18:11:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.935 18:11:12 version -- app/version.sh@20 -- # suffix=-pre 00:05:59.935 18:11:12 version -- app/version.sh@22 -- # version=25.1 00:05:59.935 18:11:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:59.935 18:11:12 version -- app/version.sh@28 -- # version=25.1rc0 00:05:59.936 18:11:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:59.936 18:11:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:59.936 18:11:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:59.936 18:11:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:59.936 00:05:59.936 real 0m0.277s 00:05:59.936 user 0m0.155s 00:05:59.936 sys 0m0.178s 00:05:59.936 18:11:12 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.936 18:11:12 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.936 ************************************ 00:05:59.936 END TEST version 00:05:59.936 ************************************ 00:05:59.936 18:11:12 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:59.936 18:11:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:59.936 18:11:12 -- spdk/autotest.sh@194 -- # uname -s 00:05:59.936 18:11:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:59.936 18:11:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:59.936 18:11:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:59.936 18:11:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:59.936 18:11:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:59.936 18:11:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:59.936 18:11:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.936 18:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 18:11:13 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:59.936 18:11:13 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:59.936 18:11:13 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:59.936 18:11:13 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:59.936 18:11:13 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:05:59.936 18:11:13 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:59.936 18:11:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:59.936 18:11:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.936 18:11:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 ************************************ 00:05:59.936 START TEST nvmf_rdma 00:05:59.936 ************************************ 00:05:59.936 18:11:13 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:00.196 * Looking for test storage... 00:06:00.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.196 18:11:13 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.196 --rc genhtml_branch_coverage=1 00:06:00.196 --rc genhtml_function_coverage=1 00:06:00.196 --rc genhtml_legend=1 00:06:00.196 --rc geninfo_all_blocks=1 00:06:00.196 --rc geninfo_unexecuted_blocks=1 00:06:00.196 00:06:00.196 ' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.196 --rc genhtml_branch_coverage=1 00:06:00.196 --rc genhtml_function_coverage=1 00:06:00.196 --rc genhtml_legend=1 00:06:00.196 --rc geninfo_all_blocks=1 00:06:00.196 --rc geninfo_unexecuted_blocks=1 00:06:00.196 00:06:00.196 ' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.196 --rc genhtml_branch_coverage=1 00:06:00.196 --rc genhtml_function_coverage=1 00:06:00.196 --rc genhtml_legend=1 00:06:00.196 --rc geninfo_all_blocks=1 00:06:00.196 --rc geninfo_unexecuted_blocks=1 00:06:00.196 00:06:00.196 ' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.196 --rc genhtml_branch_coverage=1 00:06:00.196 --rc genhtml_function_coverage=1 00:06:00.196 --rc genhtml_legend=1 00:06:00.196 --rc geninfo_all_blocks=1 00:06:00.196 --rc geninfo_unexecuted_blocks=1 00:06:00.196 00:06:00.196 ' 00:06:00.196 18:11:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:00.196 18:11:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:00.196 18:11:13 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.196 18:11:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:00.196 ************************************ 00:06:00.196 START TEST nvmf_target_core 00:06:00.196 ************************************ 00:06:00.196 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:00.457 * Looking for test storage... 00:06:00.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.457 --rc genhtml_branch_coverage=1 00:06:00.457 --rc genhtml_function_coverage=1 00:06:00.457 --rc genhtml_legend=1 00:06:00.457 --rc geninfo_all_blocks=1 00:06:00.457 --rc geninfo_unexecuted_blocks=1 00:06:00.457 00:06:00.457 ' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.457 --rc genhtml_branch_coverage=1 00:06:00.457 --rc genhtml_function_coverage=1 00:06:00.457 --rc genhtml_legend=1 00:06:00.457 --rc geninfo_all_blocks=1 00:06:00.457 --rc geninfo_unexecuted_blocks=1 00:06:00.457 00:06:00.457 ' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.457 --rc genhtml_branch_coverage=1 00:06:00.457 --rc genhtml_function_coverage=1 00:06:00.457 --rc genhtml_legend=1 00:06:00.457 --rc geninfo_all_blocks=1 00:06:00.457 --rc geninfo_unexecuted_blocks=1 00:06:00.457 00:06:00.457 ' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.457 --rc genhtml_branch_coverage=1 00:06:00.457 --rc genhtml_function_coverage=1 00:06:00.457 --rc genhtml_legend=1 00:06:00.457 --rc geninfo_all_blocks=1 00:06:00.457 --rc geninfo_unexecuted_blocks=1 00:06:00.457 00:06:00.457 ' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:00.457 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.458 18:11:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:00.458 
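Editor's note: the scripts/common.sh version check traced repeatedly in this log (lt 1.15 2 via cmp_versions, splitting version strings on IFS=.-:) boils down to the pattern sketched below. The helper here is an illustrative reimplementation for readers, not the SPDK source itself.
lt() {                                  # return 0 (true) when $1 is strictly older than $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"      # split on dots, dashes and colons, as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                            # equal versions are not "less than"
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: enable branch/function coverage flags"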
************************************ 00:06:00.458 START TEST nvmf_abort 00:06:00.458 ************************************ 00:06:00.458 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:00.718 * Looking for test storage... 00:06:00.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.718 --rc genhtml_branch_coverage=1 00:06:00.718 --rc genhtml_function_coverage=1 00:06:00.718 --rc genhtml_legend=1 00:06:00.718 --rc geninfo_all_blocks=1 00:06:00.718 --rc geninfo_unexecuted_blocks=1 00:06:00.718 00:06:00.718 ' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.718 --rc genhtml_branch_coverage=1 00:06:00.718 --rc genhtml_function_coverage=1 00:06:00.718 --rc genhtml_legend=1 00:06:00.718 --rc geninfo_all_blocks=1 00:06:00.718 --rc geninfo_unexecuted_blocks=1 00:06:00.718 00:06:00.718 ' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.718 --rc genhtml_branch_coverage=1 00:06:00.718 --rc genhtml_function_coverage=1 00:06:00.718 --rc genhtml_legend=1 00:06:00.718 --rc geninfo_all_blocks=1 00:06:00.718 --rc geninfo_unexecuted_blocks=1 00:06:00.718 00:06:00.718 ' 00:06:00.718 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.718 --rc genhtml_branch_coverage=1 00:06:00.718 --rc genhtml_function_coverage=1 00:06:00.718 --rc genhtml_legend=1 00:06:00.718 --rc geninfo_all_blocks=1 00:06:00.718 --rc geninfo_unexecuted_blocks=1 00:06:00.718 00:06:00.719 ' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:00.719 18:11:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.294 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:07.295 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:07.295 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:07.295 Found net devices under 0000:18:00.0: mlx_0_0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:07.295 Found net devices under 0000:18:00.1: mlx_0_1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # rdma_device_init 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@528 -- # allocate_nic_ips 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:07.295 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:06:07.295 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:06:07.295 altname enp24s0f0np0 00:06:07.295 altname ens785f0np0 00:06:07.295 inet 192.168.100.8/24 scope global mlx_0_0 00:06:07.295 valid_lft forever preferred_lft forever 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:07.295 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:07.295 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:06:07.295 altname enp24s0f1np1 00:06:07.295 altname ens785f1np1 00:06:07.295 inet 192.168.100.9/24 scope global mlx_0_1 00:06:07.295 valid_lft forever preferred_lft forever 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:07.295 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:07.296 18:11:20 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:06:07.296 192.168.100.9' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:06:07.296 192.168.100.9' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # head -n 1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:06:07.296 192.168.100.9' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # tail -n +2 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # head -n 1 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:06:07.296 18:11:20 
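Editor's note: the get_ip_address calls traced above reduce to the one-liner below; the interface names and addresses are the ones reported in this run, and the command shape simply restates what the trace executes.
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8 on this host
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9 on this host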
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.296 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3283497 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3283497 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3283497 ']' 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.556 18:11:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 [2024-10-08 18:11:20.524272] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:06:07.556 [2024-10-08 18:11:20.524342] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.556 [2024-10-08 18:11:20.610565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.556 [2024-10-08 18:11:20.702400] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.556 [2024-10-08 18:11:20.702446] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.556 [2024-10-08 18:11:20.702456] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.556 [2024-10-08 18:11:20.702465] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.556 [2024-10-08 18:11:20.702472] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
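Editor's note: a hypothetical condensed equivalent of the nvmfappstart/waitforlisten step logged above. The binary path, shm id, trace mask and 0xE core mask are taken from this log; the polling loop and the rpc.py call are an assumed sketch, not the autotest helpers themselves.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                           # wait until the target is listening on /var/tmp/spdk.sock
done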
00:06:07.556 [2024-10-08 18:11:20.703314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.556 [2024-10-08 18:11:20.703417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.556 [2024-10-08 18:11:20.703418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 [2024-10-08 18:11:21.479367] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e94ab0/0x1e98fa0) succeed. 00:06:08.561 [2024-10-08 18:11:21.498667] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e96050/0x1eda640) succeed. 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 Malloc0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 Delay0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 [2024-10-08 18:11:21.664836] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.561 18:11:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:08.820 [2024-10-08 18:11:21.784153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:10.727 Initializing NVMe Controllers 00:06:10.727 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:10.727 controller IO queue size 128 less than required 00:06:10.727 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:10.727 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:10.727 Initialization complete. Launching workers. 
00:06:10.727 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41945 00:06:10.727 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42006, failed to submit 62 00:06:10.727 success 41946, unsuccessful 60, failed 0 00:06:10.727 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:10.727 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.727 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.986 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.986 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:10.987 rmmod nvme_rdma 00:06:10.987 rmmod nvme_fabrics 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3283497 ']' 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3283497 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3283497 ']' 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3283497 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.987 18:11:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3283497 00:06:10.987 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:10.987 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:10.987 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3283497' 00:06:10.987 killing process with pid 3283497 00:06:10.987 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3283497 00:06:10.987 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3283497 00:06:11.246 18:11:24 
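Editor's note: a hedged sketch of the teardown that the nvmftestfini/killprocess trace above performs (subsystem removal over RPC, target shutdown, host module unload). The subsystem NQN and pid come from this run; the exact commands are illustrative rather than the common.sh implementation.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
kill 3283497 && wait 3283497            # stop the nvmf_tgt started for this test
modprobe -v -r nvme-rdma nvme-fabrics   # unload the host-side NVMe fabrics modules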
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:11.246 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:06:11.246 00:06:11.246 real 0m10.773s 00:06:11.246 user 0m14.951s 00:06:11.246 sys 0m5.625s 00:06:11.246 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.246 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.246 ************************************ 00:06:11.246 END TEST nvmf_abort 00:06:11.246 ************************************ 00:06:11.246 18:11:24 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:11.247 18:11:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.247 18:11:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.247 18:11:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 ************************************ 00:06:11.247 START TEST nvmf_ns_hotplug_stress 00:06:11.247 ************************************ 00:06:11.247 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:11.506 * Looking for test storage... 00:06:11.506 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
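The nvmf_abort run that finishes above boils down to a short rpc.py sequence plus the abort example binary. A minimal sketch of that sequence, assuming an SPDK nvmf_tgt is already running with an RDMA transport and subsystem nqn.2016-06.io.spdk:cnode0 created earlier in the job; the checkout path, address and flags are taken from this run:

#!/usr/bin/env bash
# Sketch of the abort-test command sequence logged above (not the test script itself).
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py

# Expose the subsystem and the discovery service over NVMe/RDMA on 192.168.100.8:4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

# Drive the target with the abort example for 1 second at queue depth 128 on core 0
$SPDK_DIR/build/examples/abort \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

# Tear the subsystem down once the run completes
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

The "abort submitted / success / unsuccessful" counters printed above are emitted by the abort example itself at the end of its run.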
00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.506 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.507 --rc genhtml_branch_coverage=1 00:06:11.507 --rc genhtml_function_coverage=1 00:06:11.507 --rc genhtml_legend=1 00:06:11.507 --rc geninfo_all_blocks=1 00:06:11.507 --rc geninfo_unexecuted_blocks=1 00:06:11.507 00:06:11.507 ' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.507 --rc genhtml_branch_coverage=1 00:06:11.507 --rc genhtml_function_coverage=1 00:06:11.507 --rc genhtml_legend=1 00:06:11.507 --rc geninfo_all_blocks=1 00:06:11.507 --rc geninfo_unexecuted_blocks=1 00:06:11.507 00:06:11.507 ' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.507 --rc genhtml_branch_coverage=1 00:06:11.507 --rc genhtml_function_coverage=1 00:06:11.507 --rc genhtml_legend=1 00:06:11.507 --rc geninfo_all_blocks=1 00:06:11.507 --rc geninfo_unexecuted_blocks=1 00:06:11.507 00:06:11.507 ' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:11.507 --rc genhtml_branch_coverage=1 00:06:11.507 --rc genhtml_function_coverage=1 00:06:11.507 --rc genhtml_legend=1 00:06:11.507 --rc geninfo_all_blocks=1 00:06:11.507 --rc geninfo_unexecuted_blocks=1 00:06:11.507 00:06:11.507 ' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.507 18:11:24 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.507 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:11.507 18:11:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:19.637 18:11:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:19.637 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:19.637 18:11:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:19.637 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.637 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:19.638 Found net devices under 0000:18:00.0: mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
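The detection entries around here come from nvmf/common.sh walking the Mellanox PCI functions and resolving each one to its kernel net device through sysfs. A small sketch of that lookup for a single function, using the first address found in this run (0000:18:00.0); the variable names mirror the expansions visible above:

pci=0000:18:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries, e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"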
00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:19.638 Found net devices under 0000:18:00.1: mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:19.638 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:19.638 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:06:19.638 altname enp24s0f0np0 00:06:19.638 altname ens785f0np0 00:06:19.638 inet 192.168.100.8/24 scope global mlx_0_0 00:06:19.638 valid_lft forever preferred_lft forever 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:19.638 18:11:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:19.638 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:19.638 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:06:19.638 altname enp24s0f1np1 00:06:19.638 altname ens785f1np1 00:06:19.638 inet 192.168.100.9/24 scope global mlx_0_1 00:06:19.638 valid_lft forever preferred_lft forever 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:06:19.638 192.168.100.9' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:06:19.638 192.168.100.9' 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # head -n 1 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:19.638 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:06:19.638 192.168.100.9' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # tail -n +2 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # head -n 1 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3286978 00:06:19.639 18:11:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3286978 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3286978 ']' 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.639 18:11:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 [2024-10-08 18:11:31.665739] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:06:19.639 [2024-10-08 18:11:31.665799] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.639 [2024-10-08 18:11:31.751573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.639 [2024-10-08 18:11:31.843989] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.639 [2024-10-08 18:11:31.844035] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.639 [2024-10-08 18:11:31.844045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.639 [2024-10-08 18:11:31.844053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.639 [2024-10-08 18:11:31.844060] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
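Once the target app is up, the ns_hotplug_stress test drives it entirely through rpc.py, as the calls below show: it creates the RDMA transport and subsystem cnode1, layers a delay bdev on a malloc bdev, adds a resizable null bdev, starts spdk_nvme_perf against the target, then repeatedly removes and re-adds the Delay0 namespace and grows NULL1 while perf is still running. A condensed sketch of that flow, built only from the commands and arguments visible in this run; the loop bound and the one-second pacing are illustrative assumptions, not taken from the log:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py

# Target-side setup (mirrors the rpc.py calls that follow in the log)
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Host-side load: 30 s of random reads at queue depth 128 on core 0
$SPDK_DIR/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

# Hotplug stress while perf runs: drop/re-add Delay0 (nsid 1) and grow NULL1.
# The iteration count and sleep are illustrative, not taken from the log.
size=1000
for i in $(seq 1 100); do
    kill -0 "$perf_pid" || break
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    size=$((size + 1))
    $rpc bdev_null_resize NULL1 "$size"
    sleep 1
done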
00:06:19.639 [2024-10-08 18:11:31.844981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.639 [2024-10-08 18:11:31.844881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.639 [2024-10-08 18:11:31.844982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:19.639 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:19.639 [2024-10-08 18:11:32.787906] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1846ab0/0x184afa0) succeed. 00:06:19.639 [2024-10-08 18:11:32.798922] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1848050/0x188c640) succeed. 00:06:19.898 18:11:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.157 18:11:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:20.157 [2024-10-08 18:11:33.311280] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:20.416 18:11:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:20.416 18:11:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:20.674 Malloc0 00:06:20.674 18:11:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:20.932 Delay0 00:06:20.932 18:11:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.190 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:21.190 NULL1 00:06:21.190 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:21.450 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:21.450 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3287446 00:06:21.450 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:21.450 18:11:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.829 Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 18:11:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.829 18:11:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:22.829 18:11:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:23.088 true 00:06:23.088 18:11:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:23.088 18:11:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 18:11:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:24.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.024 18:11:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:24.024 18:11:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:24.284 true 00:06:24.284 18:11:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:24.284 18:11:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 18:11:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.481 18:11:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.481 18:11:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.481 true 00:06:25.481 18:11:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:25.481 18:11:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 18:11:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.682 18:11:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:26.682 
18:11:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:26.682 true 00:06:26.682 18:11:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:26.682 18:11:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.619 18:11:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.879 18:11:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:27.879 18:11:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:27.879 true 00:06:28.138 18:11:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:28.138 18:11:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.076 18:11:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.077 18:11:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:29.077 18:11:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:29.336 true 00:06:29.336 18:11:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:29.336 18:11:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 18:11:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.274 18:11:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:30.274 18:11:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:30.533 true 00:06:30.533 18:11:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:30.533 18:11:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 18:11:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.471 18:11:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:31.471 18:11:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:31.747 true 00:06:31.747 18:11:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:31.747 18:11:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 18:11:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.686 18:11:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:32.686 18:11:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:32.945 true 00:06:32.945 18:11:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:32.945 18:11:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.883 18:11:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.883 18:11:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:33.883 18:11:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:34.142 true 00:06:34.142 18:11:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:34.142 18:11:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 18:11:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.082 18:11:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:35.082 18:11:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:35.342 true 00:06:35.342 18:11:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:35.342 18:11:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 18:11:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.279 18:11:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:36.279 18:11:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:36.538 true 00:06:36.538 18:11:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:36.538 18:11:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 18:11:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.736 18:11:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:37.736 18:11:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:37.736 true 
00:06:37.736 18:11:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:37.736 18:11:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 18:11:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.933 18:11:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:38.933 18:11:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:38.933 true 00:06:38.933 18:11:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:38.933 18:11:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.871 18:11:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.204 18:11:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:40.204 18:11:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:40.204 true 00:06:40.204 18:11:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:40.204 18:11:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 18:11:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.143 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.143 18:11:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:41.143 18:11:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:41.401 true 00:06:41.401 18:11:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:41.401 18:11:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 18:11:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.600 18:11:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:42.600 18:11:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:42.600 true 00:06:42.600 18:11:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:42.600 18:11:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 18:11:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.796 18:11:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:43.796 18:11:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:43.796 true 00:06:43.796 18:11:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:43.796 18:11:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 18:11:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.018 18:11:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:45.018 18:11:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:45.018 true 00:06:45.018 18:11:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:45.018 18:11:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.957 18:11:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.217 18:11:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:46.217 18:11:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:46.217 true 00:06:46.217 18:11:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:46.217 18:11:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 18:12:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.416 18:12:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:47.416 18:12:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:47.416 true 00:06:47.676 18:12:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:47.676 18:12:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.246 18:12:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.505 18:12:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:48.505 18:12:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:48.764 true 00:06:48.764 18:12:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:48.764 18:12:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.701 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:49.701 18:12:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.701 18:12:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:49.701 18:12:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:49.960 true 00:06:49.960 18:12:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:49.960 18:12:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 18:12:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.158 18:12:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:51.158 18:12:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:51.158 true 00:06:51.158 18:12:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:51.158 18:12:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.097 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.357 18:12:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:52.357 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:52.357 true 00:06:52.357 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:52.357 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.618 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.876 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:52.876 18:12:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:53.135 true 00:06:53.135 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:53.135 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.394 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.394 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:53.394 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:53.652 true 00:06:53.652 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:53.652 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.911 18:12:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.911 Initializing NVMe Controllers 00:06:53.911 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:53.911 Controller IO queue size 128, less than required. 00:06:53.911 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:53.911 Controller IO queue size 128, less than required. 00:06:53.911 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:53.911 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:53.911 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:53.911 Initialization complete. Launching workers. 
00:06:53.911 ========================================================
00:06:53.911 Latency(us)
00:06:53.911 Device Information : IOPS MiB/s Average min max
00:06:53.911 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6224.07 3.04 18109.62 814.69 1138482.86
00:06:53.911 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33312.30 16.27 3842.31 2262.29 291265.55
00:06:53.911 ========================================================
00:06:53.911 Total : 39536.37 19.30 6088.36 814.69 1138482.86
00:06:53.911
00:06:54.170 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:54.170 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:54.429 true 00:06:54.429 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3287446 00:06:54.429 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3287446) - No such process 00:06:54.429 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3287446 00:06:54.429 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.429 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.688 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:54.688 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:54.688 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:54.688 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.688 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:54.951 null0 00:06:54.951 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.951 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.951 18:12:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:55.211 null1 00:06:55.211 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.211 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.211 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:55.468 null2 00:06:55.468 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.468
18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.469 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:55.469 null3 00:06:55.469 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.469 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.469 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:55.785 null4 00:06:55.785 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.785 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.785 18:12:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:56.045 null5 00:06:56.045 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.045 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.045 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:56.045 null6 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:56.304 null7 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.304 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3292589 3292590 3292592 3292594 3292596 3292598 3292599 3292602 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.305 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.563 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.822 18:12:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.081 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.341 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.600 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.601 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.860 18:12:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.118 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.119 18:12:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.377 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.636 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.637 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.895 18:12:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.895 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.896 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.154 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.412 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.670 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.928 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.929 18:12:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.929 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.929 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.929 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.187 18:12:13 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.187 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.188 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
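For orientation, the xtrace entries above come from ns_hotplug_stress.sh lines 16-18: a counted loop ("(( i < 10 ))") that repeatedly attaches the null0..null7 bdevs to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1-8 and then removes them again, which is the hot-plug churn this test exercises. The sketch below is a minimal reconstruction of one way that cycle could be written: the nsid/bdev pairing, the loop bound, and the two rpc.py commands are taken verbatim from the trace, while the shuffled ordering and the strictly sequential structure are assumptions (the interleaved completion order in the log suggests the real script issues these calls concurrently).

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
    # attach null0..null7 as namespaces 1..8 of cnode1, in shuffled order
    for n in $(shuf -e {1..8}); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # detach the same namespaces again, also in shuffled order
    for n in $(shuf -e {1..8}); do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done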
00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.446 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:00.705 rmmod nvme_rdma 00:07:00.705 rmmod nvme_fabrics 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3286978 ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3286978 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3286978 ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3286978 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3286978 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3286978' 00:07:00.705 killing process with pid 3286978 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3286978 00:07:00.705 18:12:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3286978 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:01.273 00:07:01.273 real 
0m49.747s 00:07:01.273 user 3m25.847s 00:07:01.273 sys 0m14.481s 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 ************************************ 00:07:01.273 END TEST nvmf_ns_hotplug_stress 00:07:01.273 ************************************ 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 ************************************ 00:07:01.273 START TEST nvmf_delete_subsystem 00:07:01.273 ************************************ 00:07:01.273 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:01.273 * Looking for test storage... 00:07:01.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # 
(( v = 0 )) 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:01.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.274 --rc genhtml_branch_coverage=1 00:07:01.274 --rc genhtml_function_coverage=1 00:07:01.274 --rc genhtml_legend=1 00:07:01.274 --rc geninfo_all_blocks=1 00:07:01.274 --rc geninfo_unexecuted_blocks=1 00:07:01.274 00:07:01.274 ' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:01.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.274 --rc genhtml_branch_coverage=1 00:07:01.274 --rc genhtml_function_coverage=1 00:07:01.274 --rc genhtml_legend=1 00:07:01.274 --rc geninfo_all_blocks=1 00:07:01.274 --rc geninfo_unexecuted_blocks=1 00:07:01.274 00:07:01.274 ' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:01.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.274 --rc genhtml_branch_coverage=1 00:07:01.274 --rc genhtml_function_coverage=1 00:07:01.274 --rc genhtml_legend=1 00:07:01.274 --rc geninfo_all_blocks=1 00:07:01.274 --rc geninfo_unexecuted_blocks=1 00:07:01.274 00:07:01.274 ' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:01.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.274 --rc genhtml_branch_coverage=1 00:07:01.274 --rc genhtml_function_coverage=1 00:07:01.274 --rc genhtml_legend=1 00:07:01.274 --rc geninfo_all_blocks=1 00:07:01.274 --rc geninfo_unexecuted_blocks=1 00:07:01.274 
00:07:01.274 ' 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.274 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.534 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.535 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.535 18:12:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:08.191 18:12:21 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:08.191 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.191 
18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:08.191 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:08.191 Found net devices under 0000:18:00.0: mlx_0_0 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:08.191 Found net devices under 0000:18:00.1: mlx_0_1 00:07:08.191 18:12:21 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # rdma_device_init 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:08.191 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:08.192 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.192 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:07:08.192 altname enp24s0f0np0 00:07:08.192 altname ens785f0np0 00:07:08.192 inet 192.168.100.8/24 scope global mlx_0_0 00:07:08.192 valid_lft forever preferred_lft forever 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:08.192 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.192 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:07:08.192 altname enp24s0f1np1 00:07:08.192 altname 
ens785f1np1 00:07:08.192 inet 192.168.100.9/24 scope global mlx_0_1 00:07:08.192 valid_lft forever preferred_lft forever 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:08.192 192.168.100.9' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:08.192 192.168.100.9' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # head -n 1 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:08.192 192.168.100.9' 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # tail -n +2 00:07:08.192 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # head -n 1 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3296393 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3296393 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@831 -- # '[' -z 3296393 ']' 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.452 18:12:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.452 [2024-10-08 18:12:21.453389] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:07:08.452 [2024-10-08 18:12:21.453452] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.452 [2024-10-08 18:12:21.537829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.452 [2024-10-08 18:12:21.624534] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.452 [2024-10-08 18:12:21.624592] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.452 [2024-10-08 18:12:21.624602] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.452 [2024-10-08 18:12:21.624611] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.452 [2024-10-08 18:12:21.624618] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
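The startup sequence traced above (nvmfappstart in nvmf/common.sh plus waitforlisten in autotest_common.sh) reduces to launching the target application and polling its JSON-RPC socket before any rpc_cmd is issued. The following is a simplified standalone sketch, not the exact helpers: the rpc_get_methods readiness probe and the 0.1 s poll interval are assumptions.

    # Launch the NVMe-oF target on cores 0-1 (mask 0x3) with all tracepoint groups enabled,
    # then wait until its RPC socket answers before configuring it.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do                    # max_retries=100 as in the trace
        kill -0 "$nvmfpid" || exit 1                   # target died during startup
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1                                      # poll interval is an assumption
    done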
00:07:08.711 [2024-10-08 18:12:21.625282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.711 [2024-10-08 18:12:21.625283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.280 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.280 [2024-10-08 18:12:22.370517] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1637c20/0x163c110) succeed. 00:07:09.280 [2024-10-08 18:12:22.379496] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1639120/0x167d7b0) succeed. 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.539 [2024-10-08 18:12:22.484566] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.539 NULL1 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.539 Delay0 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3296591 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:09.539 18:12:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:09.539 [2024-10-08 18:12:22.608807] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
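Stripped of the xtrace prefixes, the configuration traced above is a short series of RPC calls followed by the perf workload. Written out here as plain scripts/rpc.py invocations for readability (rpc_cmd in the trace forwards the same arguments; the default /var/tmp/spdk.sock socket is assumed):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # RDMA transport, one subsystem capped at 10 namespaces, listener on the first mlx5 port
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Namespace backing: a 1000 MiB null bdev wrapped in a delay bdev that adds
    # 1,000,000 us (~1 s) to every read and write, so commands stay queued for a long time.
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 5 s of queue-depth-128 random 70/30 read/write I/O from the initiator side
    spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!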
00:07:11.446 18:12:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.446 18:12:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.446 18:12:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 NVMe io qpair process completion error 00:07:12.826 18:12:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.826 18:12:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:12.826 18:12:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3296591 00:07:12.826 18:12:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:13.085 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:13.085 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3296591 00:07:13.085 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Read completed with error (sct=0, sc=8) 00:07:13.655 starting I/O failed: -6 00:07:13.655 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed 
with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed 
with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 
starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 starting I/O failed: -6 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.656 Write completed with error (sct=0, sc=8) 00:07:13.656 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed 
with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read 
completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Write completed with error (sct=0, sc=8) 00:07:13.657 Read completed with error (sct=0, sc=8) 00:07:13.657 Initializing NVMe Controllers 00:07:13.657 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:13.657 Controller IO queue size 128, less than required. 00:07:13.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.657 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:13.657 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:13.657 Initialization complete. Launching workers. 00:07:13.657 ======================================================== 00:07:13.657 Latency(us) 00:07:13.657 Device Information : IOPS MiB/s Average min max 00:07:13.657 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.61 0.04 1591864.67 1000152.77 2969385.53 00:07:13.657 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.61 0.04 1593343.35 1000841.53 2970596.39 00:07:13.657 ======================================================== 00:07:13.657 Total : 161.22 0.08 1592604.01 1000152.77 2970596.39 00:07:13.657 00:07:13.657 [2024-10-08 18:12:26.700914] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:13.657 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:13.657 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3296591 00:07:13.657 18:12:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:13.657 [2024-10-08 18:12:26.715383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:13.657 [2024-10-08 18:12:26.715405] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
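The deletion-under-load logic traced above (target/delete_subsystem.sh lines 32-38) is the core of this test: drop the subsystem while perf still has a full queue outstanding, then poll until perf gives up. A simplified reconstruction of that loop follows; rpc.py stands in for rpc_cmd, and the error message and exit handling are assumptions.

    # Delete the subsystem while 128 commands are still queued behind the 1 s delay bdev.
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do          # perf still running?
        (( delay++ > 30 )) && { echo "perf did not exit after subsystem delete" >&2; exit 1; }
        sleep 0.5                                      # bounds the wait at roughly 15 s
    done
    # Every outstanding command is failed back with sct=0, sc=8 (generic NVMe status
    # "command aborted due to SQ deletion"), which is the flood of
    # "Read/Write completed with error" lines above, and perf exits reporting errors.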
00:07:13.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3296591 00:07:14.226 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3296591) - No such process 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3296591 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3296591 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3296591 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.226 [2024-10-08 18:12:27.243742] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.226 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3297151 00:07:14.227 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:14.227 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:14.227 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:14.227 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:14.227 [2024-10-08 18:12:27.353564] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:14.795 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.795 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:14.795 18:12:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.362 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.362 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:15.362 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.621 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.621 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:15.621 18:12:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.190 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.190 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:16.190 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.759 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.759 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:16.759 18:12:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.328 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.328 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:17.328 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.897 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.897 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:17.897 18:12:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.160 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.160 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:18.160 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.730 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.730 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:18.730 18:12:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.298 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.298 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:19.299 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.867 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.867 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:19.867 18:12:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:20.436 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:20.436 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:20.436 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:20.695 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:20.695 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:20.695 18:12:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.264 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.264 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:21.264 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.523 Initializing NVMe Controllers 00:07:21.523 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.523 Controller IO queue size 128, less than required. 00:07:21.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:21.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:21.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:21.523 Initialization complete. Launching workers. 00:07:21.523 ======================================================== 00:07:21.523 Latency(us) 00:07:21.523 Device Information : IOPS MiB/s Average min max 00:07:21.523 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001555.12 1000064.33 1004734.77 00:07:21.523 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002626.04 1000129.10 1007011.95 00:07:21.523 ======================================================== 00:07:21.523 Total : 256.00 0.12 1002090.58 1000064.33 1007011.95 00:07:21.523 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3297151 00:07:21.783 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3297151) - No such process 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3297151 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:21.783 rmmod nvme_rdma 00:07:21.783 rmmod nvme_fabrics 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3296393 ']' 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3296393 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3296393 ']' 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3296393 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
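The repeated kill -0 / sleep 0.5 entries above are delete_subsystem.sh waiting for the background spdk_nvme_perf run to finish while subsystems are deleted underneath it, giving up after roughly 20 iterations. A minimal sketch of that wait-with-timeout pattern (variable names are illustrative, not the exact test source):

  # Wait for a background workload to exit, polling with kill -0.
  perf_pid=$!          # pid of the spdk_nvme_perf run started in the background
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      # Stop waiting after ~10 s (20 iterations of 0.5 s).
      (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; break; }
      sleep 0.5
  done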
00:07:21.783 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3296393 00:07:22.042 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.042 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.042 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3296393' 00:07:22.042 killing process with pid 3296393 00:07:22.042 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3296393 00:07:22.042 18:12:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3296393 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:22.302 00:07:22.302 real 0m20.985s 00:07:22.302 user 0m50.540s 00:07:22.302 sys 0m6.554s 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.302 ************************************ 00:07:22.302 END TEST nvmf_delete_subsystem 00:07:22.302 ************************************ 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.302 ************************************ 00:07:22.302 START TEST nvmf_host_management 00:07:22.302 ************************************ 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:22.302 * Looking for test storage... 
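The killprocess entries just above first confirm that the pid still refers to the expected process (ps --no-headers -o comm=) and that it is not the sudo wrapper before signalling it, then reap it so its exit status is collected. A rough, simplified equivalent of that guard (the real helper in autotest_common.sh does more):

  # Kill a daemon started by the test, but only if the pid still looks like ours.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                     # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1                 # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                        # reap the child and collect its exit code
  }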
00:07:22.302 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.302 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.562 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.563 --rc genhtml_branch_coverage=1 00:07:22.563 --rc genhtml_function_coverage=1 00:07:22.563 --rc genhtml_legend=1 00:07:22.563 --rc geninfo_all_blocks=1 00:07:22.563 --rc geninfo_unexecuted_blocks=1 00:07:22.563 00:07:22.563 ' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.563 --rc genhtml_branch_coverage=1 00:07:22.563 --rc genhtml_function_coverage=1 00:07:22.563 --rc genhtml_legend=1 00:07:22.563 --rc geninfo_all_blocks=1 00:07:22.563 --rc geninfo_unexecuted_blocks=1 00:07:22.563 00:07:22.563 ' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.563 --rc genhtml_branch_coverage=1 00:07:22.563 --rc genhtml_function_coverage=1 00:07:22.563 --rc genhtml_legend=1 00:07:22.563 --rc geninfo_all_blocks=1 00:07:22.563 --rc geninfo_unexecuted_blocks=1 00:07:22.563 00:07:22.563 ' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.563 --rc genhtml_branch_coverage=1 00:07:22.563 --rc genhtml_function_coverage=1 00:07:22.563 --rc genhtml_legend=1 00:07:22.563 --rc geninfo_all_blocks=1 00:07:22.563 --rc geninfo_unexecuted_blocks=1 00:07:22.563 00:07:22.563 ' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:22.563 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:22.564 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.564 18:12:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:29.142 18:12:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.142 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:29.143 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:29.143 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:29.143 Found net devices under 0000:18:00.0: mlx_0_0 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:07:29.143 Found net devices under 0000:18:00.1: mlx_0_1 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # rdma_device_init 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:29.143 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
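The "Found 0000:18:00.x (0x15b3 - 0x1015)" and "Found net devices under ..." entries come from matching the Mellanox vendor/device IDs against the PCI bus and then listing the netdev names that sysfs exposes for each matching function. A minimal sketch of that sysfs lookup (the real common.sh also handles driver state and the tcp/rdma split):

  # List the kernel network interfaces backing a PCI device, e.g. 0000:18:00.0 -> mlx_0_0.
  for pci in 0000:18:00.0 0000:18:00.1; do
      vendor=$(cat /sys/bus/pci/devices/$pci/vendor)   # 0x15b3 as reported above
      device=$(cat /sys/bus/pci/devices/$pci/device)   # 0x1015 as reported above
      echo "Found $pci ($vendor - $device)"
      for net in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${net##*/}"
      done
  done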
00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:29.403 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:29.403 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:07:29.403 altname enp24s0f0np0 00:07:29.403 altname ens785f0np0 00:07:29.403 inet 192.168.100.8/24 scope global mlx_0_0 00:07:29.403 valid_lft forever preferred_lft forever 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:29.403 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:29.404 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:29.404 link/ether 50:6b:4b:b4:ab:57 brd 
ff:ff:ff:ff:ff:ff 00:07:29.404 altname enp24s0f1np1 00:07:29.404 altname ens785f1np1 00:07:29.404 inet 192.168.100.9/24 scope global mlx_0_1 00:07:29.404 valid_lft forever preferred_lft forever 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:29.404 18:12:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:29.404 192.168.100.9' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:29.404 192.168.100.9' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # head -n 1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:29.404 192.168.100.9' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # tail -n +2 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # head -n 1 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3301165 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3301165 00:07:29.404 
18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3301165 ']' 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.404 18:12:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.404 [2024-10-08 18:12:42.543521] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:07:29.404 [2024-10-08 18:12:42.543587] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.664 [2024-10-08 18:12:42.629301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.664 [2024-10-08 18:12:42.719816] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.664 [2024-10-08 18:12:42.719858] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.664 [2024-10-08 18:12:42.719868] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.664 [2024-10-08 18:12:42.719878] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.664 [2024-10-08 18:12:42.719885] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
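In the allocate_nic_ips / get_ip_address entries earlier, NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (192.168.100.8 and 192.168.100.9) are derived by parsing each RDMA interface's IPv4 address out of `ip -o -4 addr show`. The parsing step on its own, as it appears in the trace:

  # Extract the primary IPv4 address of an RDMA interface, as get_ip_address does.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9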
00:07:29.664 [2024-10-08 18:12:42.721284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.664 [2024-10-08 18:12:42.721388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.664 [2024-10-08 18:12:42.721471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.664 [2024-10-08 18:12:42.721472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.233 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.233 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:30.233 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:30.233 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.233 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.534 [2024-10-08 18:12:43.474732] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x91b5e0/0x91fad0) succeed. 00:07:30.534 [2024-10-08 18:12:43.485587] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91cc20/0x961170) succeed. 
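nvmfappstart passes -m 0x1E to nvmf_tgt, and the reactor notices above show cores 1-4 coming up, which is exactly the set of bits in that mask. A quick way to decode such an SPDK core mask in the shell:

  # Print the core numbers selected by a core mask such as 0x1E (bits 1-4 set).
  mask=0x1E
  for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "core $core selected"
  done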
00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.534 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.534 Malloc0 00:07:30.535 [2024-10-08 18:12:43.673500] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:30.535 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.535 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:30.535 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.535 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3301394 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3301394 /var/tmp/bdevperf.sock 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3301394 ']' 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
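The rpc_cmd batch above (rpcs.txt) is what produces the Malloc0 bdev and the RDMA listener on 192.168.100.8:4420 that bdevperf connects to. Its exact contents are not echoed in the trace, but an equivalent sequence issued with scripts/rpc.py would look roughly like this (bdev size, block size, serial and NQN are taken from elsewhere in the log; treat the details as assumptions):

  # Transport was already created with: nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t rdma -a 192.168.100.8 -s 4420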
00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.795 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:30.795 { 00:07:30.795 "params": { 00:07:30.795 "name": "Nvme$subsystem", 00:07:30.795 "trtype": "$TEST_TRANSPORT", 00:07:30.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.796 "adrfam": "ipv4", 00:07:30.796 "trsvcid": "$NVMF_PORT", 00:07:30.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.796 "hdgst": ${hdgst:-false}, 00:07:30.796 "ddgst": ${ddgst:-false} 00:07:30.796 }, 00:07:30.796 "method": "bdev_nvme_attach_controller" 00:07:30.796 } 00:07:30.796 EOF 00:07:30.796 )") 00:07:30.796 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:30.796 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:30.796 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:30.796 18:12:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:30.796 "params": { 00:07:30.796 "name": "Nvme0", 00:07:30.796 "trtype": "rdma", 00:07:30.796 "traddr": "192.168.100.8", 00:07:30.796 "adrfam": "ipv4", 00:07:30.796 "trsvcid": "4420", 00:07:30.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:30.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:30.796 "hdgst": false, 00:07:30.796 "ddgst": false 00:07:30.796 }, 00:07:30.796 "method": "bdev_nvme_attach_controller" 00:07:30.796 }' 00:07:30.796 [2024-10-08 18:12:43.771337] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:07:30.796 [2024-10-08 18:12:43.771395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301394 ] 00:07:30.796 [2024-10-08 18:12:43.858532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.796 [2024-10-08 18:12:43.940598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.055 Running I/O for 10 seconds... 
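bdevperf is launched with --json /dev/fd/63; that path is not a file on disk but the read end of a bash process substitution, so the JSON assembled by gen_nvmf_target_json above is streamed straight into the tool. The general pattern, with gen_config as a stand-in for the helper:

  # Feed generated JSON config to bdevperf without a temporary file.
  gen_config() {
      printf '%s\n' '{ "subsystems": [ ... ] }'   # stand-in for the gen_nvmf_target_json output
  }
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_config) -q 64 -o 65536 -w verify -t 10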
00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1643 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1643 -ge 100 ']' 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:31.623 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
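waitforio above polls the bdevperf RPC socket for per-bdev statistics and only proceeds once Nvme0n1 has completed a minimum number of reads (1643 >= 100 here), so the host-removal step is exercised while I/O is actually in flight. The polling step in isolation (illustrative; the helper lives in host_management.sh and the retry interval is an assumption):

  # Wait until the Nvme0n1 bdev has serviced at least 100 reads, via the bdevperf RPC socket.
  for i in $(seq 10 -1 1); do
      read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                      | jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && break
      sleep 0.25
  done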
00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.624 18:12:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:32.764 1768.00 IOPS, 110.50 MiB/s [2024-10-08T16:12:45.937Z] [2024-10-08 18:12:45.746558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4500 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138c4480 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4400 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4380 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013894300 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013884280 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013874200 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 
sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013864180 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013854100 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013844080 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013834000 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013823f80 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013813f00 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013803e80 len:0x10000 key:0x182500 00:07:32.764 [2024-10-08 18:12:45.746900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ad1e80 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.746924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ac1e00 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.746945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 
00:07:32.764 [2024-10-08 18:12:45.746959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ab1d80 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.746971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.746985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019aa1d00 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.746995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a91c80 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a81c00 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a71b80 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a61b00 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a51a80 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a41a00 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a31980 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 
18:12:45.747170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a21900 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a11880 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a01800 len:0x10000 key:0x182800 00:07:32.764 [2024-10-08 18:12:45.747225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198eff80 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198dff00 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cfe80 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bfe00 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198afd80 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.764 [2024-10-08 18:12:45.747351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989fd00 len:0x10000 key:0x182700 00:07:32.764 [2024-10-08 18:12:45.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747372] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988fc80 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987fc00 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986fb80 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985fb00 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984fa80 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983fa00 len:0x10000 key:0x182700 00:07:32.765 [2024-10-08 18:12:45.747489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:37 nsid:1 lba:96768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e415000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3f4000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3d3000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000e3b2000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e391000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.747976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.747987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e370000 len:0x10000 key:0x182400 00:07:32.765 [2024-10-08 18:12:45.748004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f43ed000 sqhd:7250 p:0 m:0 dnr:0 00:07:32.765 [2024-10-08 18:12:45.749814] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a01540 was disconnected and freed. reset controller. 00:07:32.765 [2024-10-08 18:12:45.750742] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3301394 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:32.765 { 00:07:32.765 "params": { 00:07:32.765 "name": "Nvme$subsystem", 00:07:32.765 "trtype": "$TEST_TRANSPORT", 00:07:32.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.765 "adrfam": "ipv4", 00:07:32.765 "trsvcid": "$NVMF_PORT", 00:07:32.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.765 "hdgst": ${hdgst:-false}, 00:07:32.765 "ddgst": ${ddgst:-false} 00:07:32.765 }, 00:07:32.765 "method": "bdev_nvme_attach_controller" 00:07:32.765 } 00:07:32.765 EOF 00:07:32.765 )") 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
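The long run of nvme_qpair messages above is the fallout the test is designed to provoke: host_management.sh removes nqn.2016-06.io.spdk:host0 from cnode0 while bdevperf still has a queue depth of 64 in flight, so every outstanding READ/WRITE completes with ABORTED - SQ DELETION and the initiator then resets the controller (the bdev_nvme_disconnected_qpair_cb and "resetting controller" lines). When reading a saved copy of such a console log offline, a quick summary can be pulled out with a hypothetical sketch like the one below; console.log is a placeholder file name, not something the test produces.

# Hypothetical post-mortem helpers for a saved console log; not part of the test.
grep -o 'ABORTED - SQ DELETION' console.log | wc -l                            # total aborted completions
grep -oE '(READ|WRITE) sqid' console.log | sort | uniq -c                      # aborted I/O split by opcode
grep -o 'lba:[0-9]*' console.log | cut -d: -f2 | sort -n | sed -n '1p;$p'      # lowest and highest aborted LBA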
00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:32.765 18:12:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:32.765 "params": { 00:07:32.765 "name": "Nvme0", 00:07:32.765 "trtype": "rdma", 00:07:32.765 "traddr": "192.168.100.8", 00:07:32.765 "adrfam": "ipv4", 00:07:32.765 "trsvcid": "4420", 00:07:32.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:32.765 "hdgst": false, 00:07:32.765 "ddgst": false 00:07:32.765 }, 00:07:32.765 "method": "bdev_nvme_attach_controller" 00:07:32.765 }' 00:07:32.765 [2024-10-08 18:12:45.804109] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:07:32.765 [2024-10-08 18:12:45.804160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301684 ] 00:07:32.765 [2024-10-08 18:12:45.887176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.025 [2024-10-08 18:12:45.969383] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.025 Running I/O for 1 seconds... 00:07:34.407 2985.00 IOPS, 186.56 MiB/s 00:07:34.407 Latency(us) 00:07:34.407 [2024-10-08T16:12:47.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:34.407 Verification LBA range: start 0x0 length 0x400 00:07:34.407 Nvme0n1 : 1.01 3034.80 189.67 0.00 0.00 20663.46 680.29 42398.94 00:07:34.407 [2024-10-08T16:12:47.580Z] =================================================================================================================== 00:07:34.407 [2024-10-08T16:12:47.580Z] Total : 3034.80 189.67 0.00 0.00 20663.46 680.29 42398.94 00:07:34.407 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3301394 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
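For reference, the bdevperf restart above receives its entire target description through the --json /dev/fd/62 process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown in the trace and wraps it in the standard SPDK "subsystems"/"bdev"/"config" envelope before handing it over. A hand-written equivalent of that launch is sketched below, with the envelope spelled out and any extra bdev option defaults the CI helper adds left out; the binary path, target address and NQNs are the ones from this run, so swap in your own when reusing it.

# Sketch of the bdevperf launch traced above, with the generated JSON inlined.
BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
"$BDEVPERF" -q 64 -o 65536 -w verify -t 1 --json <(
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)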
00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:34.407 rmmod nvme_rdma 00:07:34.407 rmmod nvme_fabrics 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3301165 ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3301165 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3301165 ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3301165 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3301165 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3301165' 00:07:34.407 killing process with pid 3301165 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3301165 00:07:34.407 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3301165 00:07:34.667 [2024-10-08 18:12:47.825492] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:34.926 00:07:34.926 real 0m12.534s 00:07:34.926 user 0m25.894s 00:07:34.926 sys 0m6.508s 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.926 ************************************ 00:07:34.926 END TEST nvmf_host_management 00:07:34.926 ************************************ 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core -- 
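The killprocess step above is worth noting if you drive these scripts by hand: before signalling pid 3301165 it confirms the pid is non-empty and still alive (kill -0), looks up the process name (reactor_1 here) and refuses to act if that name is sudo, and only then kills and waits. A stripped-down sketch of that guard:

# Sketch of the killprocess guard traced above (3301165 was this run's nvmf_tgt pid).
killprocess_sketch() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1                # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it when it is our child
}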
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.926 18:12:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.926 ************************************ 00:07:34.926 START TEST nvmf_lvol 00:07:34.927 ************************************ 00:07:34.927 18:12:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:34.927 * Looking for test storage... 00:07:34.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:34.927 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.927 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.927 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.186 --rc genhtml_branch_coverage=1 00:07:35.186 --rc genhtml_function_coverage=1 00:07:35.186 --rc genhtml_legend=1 00:07:35.186 --rc geninfo_all_blocks=1 00:07:35.186 --rc geninfo_unexecuted_blocks=1 00:07:35.186 00:07:35.186 ' 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.186 --rc genhtml_branch_coverage=1 00:07:35.186 --rc genhtml_function_coverage=1 00:07:35.186 --rc genhtml_legend=1 00:07:35.186 --rc geninfo_all_blocks=1 00:07:35.186 --rc geninfo_unexecuted_blocks=1 00:07:35.186 00:07:35.186 ' 00:07:35.186 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.186 --rc genhtml_branch_coverage=1 00:07:35.186 --rc genhtml_function_coverage=1 00:07:35.186 --rc genhtml_legend=1 00:07:35.186 --rc geninfo_all_blocks=1 00:07:35.187 --rc geninfo_unexecuted_blocks=1 00:07:35.187 00:07:35.187 ' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.187 --rc genhtml_branch_coverage=1 00:07:35.187 --rc genhtml_function_coverage=1 00:07:35.187 --rc genhtml_legend=1 00:07:35.187 --rc geninfo_all_blocks=1 00:07:35.187 --rc geninfo_unexecuted_blocks=1 00:07:35.187 00:07:35.187 ' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.187 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.187 18:12:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.796 18:12:54 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:41.796 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:41.796 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:41.796 18:12:54 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:41.796 Found net devices under 0000:18:00.0: mlx_0_0 00:07:41.796 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:41.797 Found net devices under 0000:18:00.1: mlx_0_1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # rdma_device_init 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
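The "Found 0000:18:00.x" and "Found net devices under ..." lines above come from a sysfs walk: for each supported Mellanox/Intel PCI function the harness globs /sys/bus/pci/devices/<bdf>/net/ to learn which netdev names belong to it, which is how mlx_0_0 and mlx_0_1 (the two ports of the 0x15b3:0x1015 adapter) are picked up. That lookup on its own reduces to the sketch below; the two BDFs are the ones found in this run.

# Sketch of the netdev-per-PCI-function lookup used above.
for bdf in 0000:18:00.0 0000:18:00.1; do
    for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
        [ -e "$netdir" ] || continue                # no netdev bound (driver not loaded)
        echo "Found net devices under $bdf: $(basename "$netdir")"
    done
done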
00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:41.797 
18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:41.797 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:41.797 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:07:41.797 altname enp24s0f0np0 00:07:41.797 altname ens785f0np0 00:07:41.797 inet 192.168.100.8/24 scope global mlx_0_0 00:07:41.797 valid_lft forever preferred_lft forever 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:41.797 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:41.797 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:07:41.797 altname enp24s0f1np1 00:07:41.797 altname ens785f1np1 00:07:41.797 inet 192.168.100.9/24 scope global mlx_0_1 00:07:41.797 valid_lft forever preferred_lft forever 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:41.797 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:42.057 18:12:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:42.057 192.168.100.9' 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:42.057 192.168.100.9' 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # head -n 1 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:42.057 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:42.057 192.168.100.9' 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # tail -n +2 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # head -n 1 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:42.058 
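The nvmf/common.sh trace above harvests the target addresses from the two mlx interfaces. A condensed sketch of that logic, reconstructed from the logged commands (the interface names mlx_0_0/mlx_0_1 and the 192.168.100.x addresses are simply the values seen in this run):

get_ip_address() {
    local interface=$1
    # Field 4 of "ip -o -4 addr show" is "<addr>/<prefix>"; strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9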
18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3304889 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3304889 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3304889 ']' 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.058 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.058 [2024-10-08 18:12:55.123587] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:07:42.058 [2024-10-08 18:12:55.123649] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.058 [2024-10-08 18:12:55.211584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.318 [2024-10-08 18:12:55.302752] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.318 [2024-10-08 18:12:55.302793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.318 [2024-10-08 18:12:55.302803] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.318 [2024-10-08 18:12:55.302812] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.318 [2024-10-08 18:12:55.302819] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
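Behind the nvmfappstart -m 0x7 call above, the harness launches the target binary in the background and blocks until its RPC socket answers. A minimal sketch, assuming the default /var/tmp/spdk.sock socket seen in the log (the real waitforlisten does more bookkeeping than this loop):

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll until the target answers a trivial RPC on the UNIX domain socket.
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
    sleep 0.5
done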
00:07:42.318 [2024-10-08 18:12:55.303662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.318 [2024-10-08 18:12:55.303693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.318 [2024-10-08 18:12:55.303694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.888 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.888 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:42.888 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:42.888 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.888 18:12:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.888 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.888 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:43.148 [2024-10-08 18:12:56.232036] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24c07b0/0x24c4ca0) succeed. 00:07:43.148 [2024-10-08 18:12:56.242979] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24c1d50/0x2506340) succeed. 00:07:43.407 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:43.667 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:43.667 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:43.667 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:43.667 18:12:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:43.927 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:44.187 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d7422931-2cef-4dc3-81af-7287abd21c06 00:07:44.187 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d7422931-2cef-4dc3-81af-7287abd21c06 lvol 20 00:07:44.447 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2221c38c-2fb8-432d-99ba-f1b33c41e97c 00:07:44.447 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.706 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2221c38c-2fb8-432d-99ba-f1b33c41e97c 00:07:44.706 18:12:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:44.966 [2024-10-08 18:12:58.021564] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:44.966 18:12:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:45.225 18:12:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3305378 00:07:45.225 18:12:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:45.225 18:12:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:46.164 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2221c38c-2fb8-432d-99ba-f1b33c41e97c MY_SNAPSHOT 00:07:46.424 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4d556fa5-699b-4ed3-96e1-7b6039883b58 00:07:46.424 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2221c38c-2fb8-432d-99ba-f1b33c41e97c 30 00:07:46.683 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4d556fa5-699b-4ed3-96e1-7b6039883b58 MY_CLONE 00:07:46.942 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=58259230-b1f0-4583-8fce-949239507fd2 00:07:46.942 18:12:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 58259230-b1f0-4583-8fce-949239507fd2 00:07:47.211 18:13:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3305378 00:07:57.208 Initializing NVMe Controllers 00:07:57.208 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:57.208 Controller IO queue size 128, less than required. 00:07:57.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.208 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:57.208 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:57.208 Initialization complete. Launching workers. 
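The calls traced above are the core of the nvmf_lvol test: build a logical volume on a RAID0 of malloc bdevs, export it over NVMe-oF/RDMA, then snapshot, resize, clone and inflate it while spdk_nvme_perf keeps random writes in flight. A condensed sketch of the logged rpc.py sequence (UUIDs are captured into variables here rather than spelled out as in the log):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                     # -> Malloc0
$rpc bdev_malloc_create 64 512                     # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
# ... spdk_nvme_perf runs randwrite against 192.168.100.8:4420 in the background ...
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # point-in-time copy taken under I/O
$rpc bdev_lvol_resize "$lvol" 30                          # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                           # detach the clone from its parent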
00:07:57.208 ======================================================== 00:07:57.208 Latency(us) 00:07:57.208 Device Information : IOPS MiB/s Average min max 00:07:57.208 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16448.90 64.25 7783.63 2252.85 53351.56 00:07:57.208 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16478.60 64.37 7769.11 3722.93 49364.52 00:07:57.208 ======================================================== 00:07:57.208 Total : 32927.49 128.62 7776.36 2252.85 53351.56 00:07:57.208 00:07:57.208 18:13:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:57.208 18:13:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2221c38c-2fb8-432d-99ba-f1b33c41e97c 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7422931-2cef-4dc3-81af-7287abd21c06 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:57.208 rmmod nvme_rdma 00:07:57.208 rmmod nvme_fabrics 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3304889 ']' 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3304889 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3304889 ']' 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3304889 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.208 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3304889 00:07:57.468 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.468 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.468 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3304889' 00:07:57.468 killing process with pid 3304889 00:07:57.468 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3304889 00:07:57.468 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3304889 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:57.728 00:07:57.728 real 0m22.813s 00:07:57.728 user 1m13.175s 00:07:57.728 sys 0m6.662s 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.728 ************************************ 00:07:57.728 END TEST nvmf_lvol 00:07:57.728 ************************************ 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.728 ************************************ 00:07:57.728 START TEST nvmf_lvs_grow 00:07:57.728 ************************************ 00:07:57.728 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:57.988 * Looking for test storage... 
00:07:57.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:57.988 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.988 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.988 18:13:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.988 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.989 --rc genhtml_branch_coverage=1 00:07:57.989 --rc genhtml_function_coverage=1 00:07:57.989 --rc genhtml_legend=1 00:07:57.989 --rc geninfo_all_blocks=1 00:07:57.989 --rc geninfo_unexecuted_blocks=1 00:07:57.989 00:07:57.989 ' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.989 --rc genhtml_branch_coverage=1 00:07:57.989 --rc genhtml_function_coverage=1 00:07:57.989 --rc genhtml_legend=1 00:07:57.989 --rc geninfo_all_blocks=1 00:07:57.989 --rc geninfo_unexecuted_blocks=1 00:07:57.989 00:07:57.989 ' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.989 --rc genhtml_branch_coverage=1 00:07:57.989 --rc genhtml_function_coverage=1 00:07:57.989 --rc genhtml_legend=1 00:07:57.989 --rc geninfo_all_blocks=1 00:07:57.989 --rc geninfo_unexecuted_blocks=1 00:07:57.989 00:07:57.989 ' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.989 --rc genhtml_branch_coverage=1 00:07:57.989 --rc genhtml_function_coverage=1 00:07:57.989 --rc genhtml_legend=1 00:07:57.989 --rc geninfo_all_blocks=1 00:07:57.989 --rc geninfo_unexecuted_blocks=1 00:07:57.989 00:07:57.989 ' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
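The cmp_versions trace a few entries back decides whether the installed lcov predates 2.x, which in turn picks the older --rc option spelling. A simplified sketch covering only the strictly-less-than case exercised here (the real helper in scripts/common.sh supports every comparison operator):

version_lt() {                           # usage: version_lt 1.15 2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                             # equal versions are not "less than"
}
version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'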
00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.989 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.989 18:13:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.566 18:13:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.566 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:04.567 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:04.567 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:04.567 18:13:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:04.567 Found net devices under 0000:18:00.0: mlx_0_0 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:04.567 Found net devices under 0000:18:00.1: mlx_0_1 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # rdma_device_init 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:08:04.567 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:04.827 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:04.828 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.828 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:08:04.828 altname enp24s0f0np0 00:08:04.828 altname ens785f0np0 00:08:04.828 inet 192.168.100.8/24 scope global mlx_0_0 00:08:04.828 valid_lft forever preferred_lft forever 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:04.828 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.828 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:08:04.828 altname enp24s0f1np1 00:08:04.828 altname ens785f1np1 00:08:04.828 inet 192.168.100.9/24 scope global mlx_0_1 00:08:04.828 valid_lft forever preferred_lft forever 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:04.828 18:13:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:04.828 192.168.100.9' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:04.828 192.168.100.9' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # head -n 1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:04.828 192.168.100.9' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # tail -n +2 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # head -n 1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3310019 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3310019 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3310019 ']' 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.828 18:13:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.088 [2024-10-08 18:13:18.033602] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:08:05.088 [2024-10-08 18:13:18.033663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.088 [2024-10-08 18:13:18.121025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.088 [2024-10-08 18:13:18.209873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.088 [2024-10-08 18:13:18.209920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.088 [2024-10-08 18:13:18.209929] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.088 [2024-10-08 18:13:18.209938] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.088 [2024-10-08 18:13:18.209957] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
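For readers following the trace above, the per-interface address discovery in nvmf/common.sh reduces to roughly the sketch below (a minimal sketch; in the script itself the two target IPs are actually picked off the RDMA_IP_LIST built with the same awk/cut pipeline):

# Sketch of the get_ip_address step traced above (nvmf/common.sh)
get_ip_address() {
    local interface=$1
    # field 4 of `ip -o -4 addr show` is "addr/prefix"; strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run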
00:08:05.088 [2024-10-08 18:13:18.210426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.024 18:13:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:06.024 [2024-10-08 18:13:19.124341] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcd70e0/0xcdb5d0) succeed. 00:08:06.024 [2024-10-08 18:13:19.133671] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcd85e0/0xd1cc70) succeed. 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.284 ************************************ 00:08:06.284 START TEST lvs_grow_clean 00:08:06.284 ************************************ 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.284 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.544 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:06.544 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:06.544 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:06.544 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:06.544 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:06.803 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:06.803 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:06.803 18:13:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d lvol 150 00:08:07.062 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e8c1903-1d69-419f-be86-7a0d2073e05d 00:08:07.062 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.062 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:07.321 [2024-10-08 18:13:20.242077] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:07.321 [2024-10-08 18:13:20.242135] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:07.321 true 00:08:07.321 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:07.321 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:07.321 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:07.321 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.580 18:13:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e8c1903-1d69-419f-be86-7a0d2073e05d 00:08:07.838 18:13:20 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:08.098 [2024-10-08 18:13:21.060722] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:08.098 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:08.357 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3310438 00:08:08.357 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:08.357 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.357 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3310438 /var/tmp/bdevperf.sock 00:08:08.357 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3310438 ']' 00:08:08.358 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.358 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.358 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.358 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.358 18:13:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:08.358 [2024-10-08 18:13:21.332455] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
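Condensed, the lvs_grow_clean setup traced above amounts to the following RPC sequence (a sketch using the same rpc.py calls and sizes as this run; $rpc stands for the workspace scripts/rpc.py path, and in the actual run the grow_lvstore call is issued later, while bdevperf I/O is in flight):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio_file"                                # 200M backing file -> 49 usable 4M clusters
$rpc bdev_aio_create "$aio_file" aio_bdev 4096              # expose the file as an AIO bdev
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # lvstore on top of it (UUID captured)
$rpc bdev_lvol_create -u "$lvs" lvol 150                    # 150M logical volume
truncate -s 400M "$aio_file"                                # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                               # ...and let the AIO bdev pick up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"                       # grow the lvstore into the new space
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after the grow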
00:08:08.358 [2024-10-08 18:13:21.332513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310438 ] 00:08:08.358 [2024-10-08 18:13:21.417568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.358 [2024-10-08 18:13:21.497990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.295 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.295 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:09.295 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.295 Nvme0n1 00:08:09.295 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.554 [ 00:08:09.554 { 00:08:09.554 "name": "Nvme0n1", 00:08:09.554 "aliases": [ 00:08:09.554 "6e8c1903-1d69-419f-be86-7a0d2073e05d" 00:08:09.554 ], 00:08:09.554 "product_name": "NVMe disk", 00:08:09.554 "block_size": 4096, 00:08:09.554 "num_blocks": 38912, 00:08:09.554 "uuid": "6e8c1903-1d69-419f-be86-7a0d2073e05d", 00:08:09.554 "numa_id": 0, 00:08:09.554 "assigned_rate_limits": { 00:08:09.554 "rw_ios_per_sec": 0, 00:08:09.554 "rw_mbytes_per_sec": 0, 00:08:09.554 "r_mbytes_per_sec": 0, 00:08:09.554 "w_mbytes_per_sec": 0 00:08:09.554 }, 00:08:09.554 "claimed": false, 00:08:09.554 "zoned": false, 00:08:09.554 "supported_io_types": { 00:08:09.554 "read": true, 00:08:09.554 "write": true, 00:08:09.554 "unmap": true, 00:08:09.554 "flush": true, 00:08:09.554 "reset": true, 00:08:09.554 "nvme_admin": true, 00:08:09.554 "nvme_io": true, 00:08:09.554 "nvme_io_md": false, 00:08:09.554 "write_zeroes": true, 00:08:09.554 "zcopy": false, 00:08:09.554 "get_zone_info": false, 00:08:09.554 "zone_management": false, 00:08:09.554 "zone_append": false, 00:08:09.554 "compare": true, 00:08:09.554 "compare_and_write": true, 00:08:09.554 "abort": true, 00:08:09.554 "seek_hole": false, 00:08:09.554 "seek_data": false, 00:08:09.554 "copy": true, 00:08:09.554 "nvme_iov_md": false 00:08:09.554 }, 00:08:09.554 "memory_domains": [ 00:08:09.554 { 00:08:09.554 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:09.554 "dma_device_type": 0 00:08:09.554 } 00:08:09.554 ], 00:08:09.554 "driver_specific": { 00:08:09.554 "nvme": [ 00:08:09.554 { 00:08:09.554 "trid": { 00:08:09.554 "trtype": "RDMA", 00:08:09.554 "adrfam": "IPv4", 00:08:09.554 "traddr": "192.168.100.8", 00:08:09.554 "trsvcid": "4420", 00:08:09.554 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.554 }, 00:08:09.554 "ctrlr_data": { 00:08:09.554 "cntlid": 1, 00:08:09.554 "vendor_id": "0x8086", 00:08:09.554 "model_number": "SPDK bdev Controller", 00:08:09.554 "serial_number": "SPDK0", 00:08:09.554 "firmware_revision": "25.01", 00:08:09.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.554 "oacs": { 00:08:09.554 "security": 0, 00:08:09.554 "format": 0, 00:08:09.554 "firmware": 0, 00:08:09.554 "ns_manage": 0 00:08:09.554 }, 00:08:09.554 "multi_ctrlr": true, 
00:08:09.554 "ana_reporting": false 00:08:09.554 }, 00:08:09.554 "vs": { 00:08:09.554 "nvme_version": "1.3" 00:08:09.554 }, 00:08:09.554 "ns_data": { 00:08:09.554 "id": 1, 00:08:09.554 "can_share": true 00:08:09.554 } 00:08:09.554 } 00:08:09.554 ], 00:08:09.554 "mp_policy": "active_passive" 00:08:09.554 } 00:08:09.554 } 00:08:09.554 ] 00:08:09.554 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3310625 00:08:09.554 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.554 18:13:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.813 Running I/O for 10 seconds... 00:08:10.752 Latency(us) 00:08:10.752 [2024-10-08T16:13:23.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.752 Nvme0n1 : 1.00 33536.00 131.00 0.00 0.00 0.00 0.00 0.00 00:08:10.752 [2024-10-08T16:13:23.925Z] =================================================================================================================== 00:08:10.752 [2024-10-08T16:13:23.925Z] Total : 33536.00 131.00 0.00 0.00 0.00 0.00 0.00 00:08:10.752 00:08:11.690 18:13:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:11.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.690 Nvme0n1 : 2.00 33856.50 132.25 0.00 0.00 0.00 0.00 0.00 00:08:11.690 [2024-10-08T16:13:24.863Z] =================================================================================================================== 00:08:11.690 [2024-10-08T16:13:24.863Z] Total : 33856.50 132.25 0.00 0.00 0.00 0.00 0.00 00:08:11.690 00:08:11.690 true 00:08:11.950 18:13:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:11.950 18:13:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:11.950 18:13:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:11.950 18:13:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:11.950 18:13:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3310625 00:08:12.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.888 Nvme0n1 : 3.00 34026.67 132.92 0.00 0.00 0.00 0.00 0.00 00:08:12.888 [2024-10-08T16:13:26.061Z] =================================================================================================================== 00:08:12.888 [2024-10-08T16:13:26.061Z] Total : 34026.67 132.92 0.00 0.00 0.00 0.00 0.00 00:08:12.888 00:08:13.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.828 Nvme0n1 : 4.00 34191.50 133.56 0.00 0.00 0.00 0.00 0.00 00:08:13.828 [2024-10-08T16:13:27.001Z] 
=================================================================================================================== 00:08:13.828 [2024-10-08T16:13:27.001Z] Total : 34191.50 133.56 0.00 0.00 0.00 0.00 0.00 00:08:13.828 00:08:14.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.766 Nvme0n1 : 5.00 34290.60 133.95 0.00 0.00 0.00 0.00 0.00 00:08:14.767 [2024-10-08T16:13:27.940Z] =================================================================================================================== 00:08:14.767 [2024-10-08T16:13:27.940Z] Total : 34290.60 133.95 0.00 0.00 0.00 0.00 0.00 00:08:14.767 00:08:15.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.714 Nvme0n1 : 6.00 34372.67 134.27 0.00 0.00 0.00 0.00 0.00 00:08:15.714 [2024-10-08T16:13:28.887Z] =================================================================================================================== 00:08:15.714 [2024-10-08T16:13:28.887Z] Total : 34372.67 134.27 0.00 0.00 0.00 0.00 0.00 00:08:15.714 00:08:16.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.724 Nvme0n1 : 7.00 34414.29 134.43 0.00 0.00 0.00 0.00 0.00 00:08:16.724 [2024-10-08T16:13:29.897Z] =================================================================================================================== 00:08:16.724 [2024-10-08T16:13:29.897Z] Total : 34414.29 134.43 0.00 0.00 0.00 0.00 0.00 00:08:16.724 00:08:17.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.663 Nvme0n1 : 8.00 34363.50 134.23 0.00 0.00 0.00 0.00 0.00 00:08:17.663 [2024-10-08T16:13:30.836Z] =================================================================================================================== 00:08:17.663 [2024-10-08T16:13:30.836Z] Total : 34363.50 134.23 0.00 0.00 0.00 0.00 0.00 00:08:17.663 00:08:18.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.601 Nvme0n1 : 9.00 34403.11 134.39 0.00 0.00 0.00 0.00 0.00 00:08:18.601 [2024-10-08T16:13:31.774Z] =================================================================================================================== 00:08:18.601 [2024-10-08T16:13:31.774Z] Total : 34403.11 134.39 0.00 0.00 0.00 0.00 0.00 00:08:18.601 00:08:19.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.982 Nvme0n1 : 10.00 34444.90 134.55 0.00 0.00 0.00 0.00 0.00 00:08:19.982 [2024-10-08T16:13:33.155Z] =================================================================================================================== 00:08:19.982 [2024-10-08T16:13:33.155Z] Total : 34444.90 134.55 0.00 0.00 0.00 0.00 0.00 00:08:19.982 00:08:19.982 00:08:19.982 Latency(us) 00:08:19.982 [2024-10-08T16:13:33.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.982 Nvme0n1 : 10.00 34446.20 134.56 0.00 0.00 3713.44 2749.66 14019.01 00:08:19.982 [2024-10-08T16:13:33.155Z] =================================================================================================================== 00:08:19.982 [2024-10-08T16:13:33.155Z] Total : 34446.20 134.56 0.00 0.00 3713.44 2749.66 14019.01 00:08:19.982 { 00:08:19.982 "results": [ 00:08:19.982 { 00:08:19.982 "job": "Nvme0n1", 00:08:19.982 "core_mask": "0x2", 00:08:19.982 "workload": "randwrite", 00:08:19.982 "status": "finished", 00:08:19.982 "queue_depth": 128, 00:08:19.982 "io_size": 4096, 
00:08:19.982 "runtime": 10.003338, 00:08:19.982 "iops": 34446.20185781986, 00:08:19.982 "mibps": 134.55547600710884, 00:08:19.982 "io_failed": 0, 00:08:19.982 "io_timeout": 0, 00:08:19.982 "avg_latency_us": 3713.439528737881, 00:08:19.982 "min_latency_us": 2749.662608695652, 00:08:19.982 "max_latency_us": 14019.005217391305 00:08:19.982 } 00:08:19.982 ], 00:08:19.982 "core_count": 1 00:08:19.982 } 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3310438 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3310438 ']' 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3310438 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310438 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310438' 00:08:19.982 killing process with pid 3310438 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3310438 00:08:19.982 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.982 00:08:19.982 Latency(us) 00:08:19.982 [2024-10-08T16:13:33.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.982 [2024-10-08T16:13:33.155Z] =================================================================================================================== 00:08:19.982 [2024-10-08T16:13:33.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.982 18:13:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3310438 00:08:19.983 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:20.242 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.502 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:20.502 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:20.761 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:20.761 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:20.761 18:13:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.761 [2024-10-08 18:13:33.896018] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.021 18:13:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:21.021 request: 00:08:21.021 { 00:08:21.021 "uuid": "91b146d3-c991-4bd4-845b-e3f2244c1e4d", 00:08:21.021 "method": "bdev_lvol_get_lvstores", 00:08:21.021 "req_id": 1 00:08:21.021 } 00:08:21.021 Got JSON-RPC error response 00:08:21.021 response: 00:08:21.021 { 00:08:21.021 "code": -19, 00:08:21.021 "message": "No such device" 00:08:21.021 } 00:08:21.021 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:21.021 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.021 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.021 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.021 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.281 aio_bdev 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e8c1903-1d69-419f-be86-7a0d2073e05d 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6e8c1903-1d69-419f-be86-7a0d2073e05d 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.281 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.542 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e8c1903-1d69-419f-be86-7a0d2073e05d -t 2000 00:08:21.542 [ 00:08:21.542 { 00:08:21.542 "name": "6e8c1903-1d69-419f-be86-7a0d2073e05d", 00:08:21.542 "aliases": [ 00:08:21.542 "lvs/lvol" 00:08:21.542 ], 00:08:21.542 "product_name": "Logical Volume", 00:08:21.542 "block_size": 4096, 00:08:21.542 "num_blocks": 38912, 00:08:21.542 "uuid": "6e8c1903-1d69-419f-be86-7a0d2073e05d", 00:08:21.542 "assigned_rate_limits": { 00:08:21.542 "rw_ios_per_sec": 0, 00:08:21.542 "rw_mbytes_per_sec": 0, 00:08:21.542 "r_mbytes_per_sec": 0, 00:08:21.542 "w_mbytes_per_sec": 0 00:08:21.542 }, 00:08:21.542 "claimed": false, 00:08:21.542 "zoned": false, 00:08:21.542 "supported_io_types": { 00:08:21.542 "read": true, 00:08:21.542 "write": true, 00:08:21.542 "unmap": true, 00:08:21.542 "flush": false, 00:08:21.542 "reset": true, 00:08:21.542 "nvme_admin": false, 00:08:21.542 "nvme_io": false, 00:08:21.542 "nvme_io_md": false, 00:08:21.542 "write_zeroes": true, 00:08:21.542 "zcopy": false, 00:08:21.542 "get_zone_info": false, 00:08:21.542 "zone_management": false, 00:08:21.542 "zone_append": false, 00:08:21.542 "compare": false, 00:08:21.542 "compare_and_write": false, 00:08:21.542 "abort": false, 00:08:21.542 "seek_hole": true, 00:08:21.542 "seek_data": true, 00:08:21.542 "copy": false, 00:08:21.542 "nvme_iov_md": false 00:08:21.542 }, 00:08:21.542 "driver_specific": { 00:08:21.542 "lvol": { 00:08:21.542 "lvol_store_uuid": "91b146d3-c991-4bd4-845b-e3f2244c1e4d", 00:08:21.542 "base_bdev": "aio_bdev", 00:08:21.542 "thin_provision": false, 00:08:21.542 "num_allocated_clusters": 38, 00:08:21.542 "snapshot": false, 00:08:21.542 "clone": false, 00:08:21.542 "esnap_clone": false 00:08:21.542 } 00:08:21.542 } 00:08:21.542 } 00:08:21.542 ] 00:08:21.801 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:21.801 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:21.801 18:13:34 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:21.801 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:21.801 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:21.801 18:13:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:22.060 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:22.060 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e8c1903-1d69-419f-be86-7a0d2073e05d 00:08:22.320 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91b146d3-c991-4bd4-845b-e3f2244c1e4d 00:08:22.579 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.579 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.579 00:08:22.579 real 0m16.498s 00:08:22.579 user 0m16.388s 00:08:22.579 sys 0m1.363s 00:08:22.579 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.579 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:22.579 ************************************ 00:08:22.579 END TEST lvs_grow_clean 00:08:22.579 ************************************ 00:08:22.838 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:22.838 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:22.838 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.839 ************************************ 00:08:22.839 START TEST lvs_grow_dirty 00:08:22.839 ************************************ 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.839 18:13:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.098 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:23.098 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:23.098 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=32163456-168d-4fe6-b344-e59361ba2887 00:08:23.098 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:23.098 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:23.357 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:23.357 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:23.357 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32163456-168d-4fe6-b344-e59361ba2887 lvol 150 00:08:23.616 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e207f8f7-eb77-4800-b682-a422d745e19b 00:08:23.616 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.616 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:23.876 [2024-10-08 18:13:36.823070] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:23.876 [2024-10-08 18:13:36.823131] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:23.876 true 00:08:23.876 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:23.876 18:13:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:23.876 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:23.876 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:24.135 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e207f8f7-eb77-4800-b682-a422d745e19b 00:08:24.394 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:24.653 [2024-10-08 18:13:37.585649] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3312697 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3312697 /var/tmp/bdevperf.sock 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3312697 ']' 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.653 18:13:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.653 [2024-10-08 18:13:37.821878] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
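As in the clean pass, the lvol is then exported over NVMe-oF/RDMA and exercised with bdevperf (attached just below). Condensed, the export and attach side of the trace looks roughly like this ($rpc as in the earlier sketch, same NQN, address and port as this run):

# Target side: subsystem with the lvol as namespace, listening on RDMA 192.168.100.8:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e207f8f7-eb77-4800-b682-a422d745e19b
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# Initiator side: bdevperf (started with -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z)
# attaches the exported namespace as Nvme0n1 before perform_tests is run
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
     -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0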
00:08:24.653 [2024-10-08 18:13:37.821936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312697 ] 00:08:24.913 [2024-10-08 18:13:37.904093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.913 [2024-10-08 18:13:37.984869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.851 18:13:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.851 18:13:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:25.851 18:13:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:25.851 Nvme0n1 00:08:25.851 18:13:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:26.110 [ 00:08:26.110 { 00:08:26.110 "name": "Nvme0n1", 00:08:26.110 "aliases": [ 00:08:26.110 "e207f8f7-eb77-4800-b682-a422d745e19b" 00:08:26.110 ], 00:08:26.110 "product_name": "NVMe disk", 00:08:26.110 "block_size": 4096, 00:08:26.110 "num_blocks": 38912, 00:08:26.110 "uuid": "e207f8f7-eb77-4800-b682-a422d745e19b", 00:08:26.110 "numa_id": 0, 00:08:26.110 "assigned_rate_limits": { 00:08:26.110 "rw_ios_per_sec": 0, 00:08:26.110 "rw_mbytes_per_sec": 0, 00:08:26.110 "r_mbytes_per_sec": 0, 00:08:26.110 "w_mbytes_per_sec": 0 00:08:26.110 }, 00:08:26.110 "claimed": false, 00:08:26.110 "zoned": false, 00:08:26.110 "supported_io_types": { 00:08:26.110 "read": true, 00:08:26.110 "write": true, 00:08:26.110 "unmap": true, 00:08:26.110 "flush": true, 00:08:26.110 "reset": true, 00:08:26.110 "nvme_admin": true, 00:08:26.110 "nvme_io": true, 00:08:26.110 "nvme_io_md": false, 00:08:26.110 "write_zeroes": true, 00:08:26.110 "zcopy": false, 00:08:26.110 "get_zone_info": false, 00:08:26.110 "zone_management": false, 00:08:26.110 "zone_append": false, 00:08:26.111 "compare": true, 00:08:26.111 "compare_and_write": true, 00:08:26.111 "abort": true, 00:08:26.111 "seek_hole": false, 00:08:26.111 "seek_data": false, 00:08:26.111 "copy": true, 00:08:26.111 "nvme_iov_md": false 00:08:26.111 }, 00:08:26.111 "memory_domains": [ 00:08:26.111 { 00:08:26.111 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:26.111 "dma_device_type": 0 00:08:26.111 } 00:08:26.111 ], 00:08:26.111 "driver_specific": { 00:08:26.111 "nvme": [ 00:08:26.111 { 00:08:26.111 "trid": { 00:08:26.111 "trtype": "RDMA", 00:08:26.111 "adrfam": "IPv4", 00:08:26.111 "traddr": "192.168.100.8", 00:08:26.111 "trsvcid": "4420", 00:08:26.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:26.111 }, 00:08:26.111 "ctrlr_data": { 00:08:26.111 "cntlid": 1, 00:08:26.111 "vendor_id": "0x8086", 00:08:26.111 "model_number": "SPDK bdev Controller", 00:08:26.111 "serial_number": "SPDK0", 00:08:26.111 "firmware_revision": "25.01", 00:08:26.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.111 "oacs": { 00:08:26.111 "security": 0, 00:08:26.111 "format": 0, 00:08:26.111 "firmware": 0, 00:08:26.111 "ns_manage": 0 00:08:26.111 }, 00:08:26.111 "multi_ctrlr": true, 
00:08:26.111 "ana_reporting": false 00:08:26.111 }, 00:08:26.111 "vs": { 00:08:26.111 "nvme_version": "1.3" 00:08:26.111 }, 00:08:26.111 "ns_data": { 00:08:26.111 "id": 1, 00:08:26.111 "can_share": true 00:08:26.111 } 00:08:26.111 } 00:08:26.111 ], 00:08:26.111 "mp_policy": "active_passive" 00:08:26.111 } 00:08:26.111 } 00:08:26.111 ] 00:08:26.111 18:13:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3312883 00:08:26.111 18:13:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:26.111 18:13:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:26.111 Running I/O for 10 seconds... 00:08:27.491 Latency(us) 00:08:27.491 [2024-10-08T16:13:40.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.491 Nvme0n1 : 1.00 33376.00 130.38 0.00 0.00 0.00 0.00 0.00 00:08:27.491 [2024-10-08T16:13:40.664Z] =================================================================================================================== 00:08:27.491 [2024-10-08T16:13:40.664Z] Total : 33376.00 130.38 0.00 0.00 0.00 0.00 0.00 00:08:27.491 00:08:28.060 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:28.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.319 Nvme0n1 : 2.00 33584.00 131.19 0.00 0.00 0.00 0.00 0.00 00:08:28.319 [2024-10-08T16:13:41.492Z] =================================================================================================================== 00:08:28.319 [2024-10-08T16:13:41.492Z] Total : 33584.00 131.19 0.00 0.00 0.00 0.00 0.00 00:08:28.319 00:08:28.319 true 00:08:28.319 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:28.319 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:28.579 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:28.579 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:28.579 18:13:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3312883 00:08:29.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.148 Nvme0n1 : 3.00 33802.67 132.04 0.00 0.00 0.00 0.00 0.00 00:08:29.148 [2024-10-08T16:13:42.321Z] =================================================================================================================== 00:08:29.148 [2024-10-08T16:13:42.321Z] Total : 33802.67 132.04 0.00 0.00 0.00 0.00 0.00 00:08:29.148 00:08:30.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.085 Nvme0n1 : 4.00 34009.00 132.85 0.00 0.00 0.00 0.00 0.00 00:08:30.085 [2024-10-08T16:13:43.258Z] 
=================================================================================================================== 00:08:30.085 [2024-10-08T16:13:43.258Z] Total : 34009.00 132.85 0.00 0.00 0.00 0.00 0.00 00:08:30.085 00:08:31.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.467 Nvme0n1 : 5.00 34150.20 133.40 0.00 0.00 0.00 0.00 0.00 00:08:31.467 [2024-10-08T16:13:44.640Z] =================================================================================================================== 00:08:31.467 [2024-10-08T16:13:44.640Z] Total : 34150.20 133.40 0.00 0.00 0.00 0.00 0.00 00:08:31.467 00:08:32.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.407 Nvme0n1 : 6.00 34234.83 133.73 0.00 0.00 0.00 0.00 0.00 00:08:32.407 [2024-10-08T16:13:45.580Z] =================================================================================================================== 00:08:32.407 [2024-10-08T16:13:45.580Z] Total : 34234.83 133.73 0.00 0.00 0.00 0.00 0.00 00:08:32.407 00:08:33.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.347 Nvme0n1 : 7.00 34308.71 134.02 0.00 0.00 0.00 0.00 0.00 00:08:33.347 [2024-10-08T16:13:46.520Z] =================================================================================================================== 00:08:33.347 [2024-10-08T16:13:46.520Z] Total : 34308.71 134.02 0.00 0.00 0.00 0.00 0.00 00:08:33.347 00:08:34.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.286 Nvme0n1 : 8.00 34347.25 134.17 0.00 0.00 0.00 0.00 0.00 00:08:34.286 [2024-10-08T16:13:47.459Z] =================================================================================================================== 00:08:34.286 [2024-10-08T16:13:47.459Z] Total : 34347.25 134.17 0.00 0.00 0.00 0.00 0.00 00:08:34.286 00:08:35.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.225 Nvme0n1 : 9.00 34395.78 134.36 0.00 0.00 0.00 0.00 0.00 00:08:35.225 [2024-10-08T16:13:48.398Z] =================================================================================================================== 00:08:35.225 [2024-10-08T16:13:48.398Z] Total : 34395.78 134.36 0.00 0.00 0.00 0.00 0.00 00:08:35.225 00:08:36.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.162 Nvme0n1 : 10.00 34415.30 134.43 0.00 0.00 0.00 0.00 0.00 00:08:36.162 [2024-10-08T16:13:49.335Z] =================================================================================================================== 00:08:36.162 [2024-10-08T16:13:49.335Z] Total : 34415.30 134.43 0.00 0.00 0.00 0.00 0.00 00:08:36.162 00:08:36.162 00:08:36.162 Latency(us) 00:08:36.162 [2024-10-08T16:13:49.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.162 Nvme0n1 : 10.00 34414.89 134.43 0.00 0.00 3716.80 2194.03 10143.83 00:08:36.162 [2024-10-08T16:13:49.335Z] =================================================================================================================== 00:08:36.162 [2024-10-08T16:13:49.335Z] Total : 34414.89 134.43 0.00 0.00 3716.80 2194.03 10143.83 00:08:36.162 { 00:08:36.162 "results": [ 00:08:36.162 { 00:08:36.162 "job": "Nvme0n1", 00:08:36.162 "core_mask": "0x2", 00:08:36.162 "workload": "randwrite", 00:08:36.162 "status": "finished", 00:08:36.162 "queue_depth": 128, 00:08:36.162 "io_size": 4096, 
00:08:36.162 "runtime": 10.003285, 00:08:36.162 "iops": 34414.89470708872, 00:08:36.162 "mibps": 134.43318244956532, 00:08:36.162 "io_failed": 0, 00:08:36.162 "io_timeout": 0, 00:08:36.162 "avg_latency_us": 3716.801345385832, 00:08:36.162 "min_latency_us": 2194.031304347826, 00:08:36.162 "max_latency_us": 10143.83304347826 00:08:36.162 } 00:08:36.162 ], 00:08:36.162 "core_count": 1 00:08:36.162 } 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3312697 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3312697 ']' 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3312697 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.162 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3312697 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3312697' 00:08:36.422 killing process with pid 3312697 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3312697 00:08:36.422 Received shutdown signal, test time was about 10.000000 seconds 00:08:36.422 00:08:36.422 Latency(us) 00:08:36.422 [2024-10-08T16:13:49.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.422 [2024-10-08T16:13:49.595Z] =================================================================================================================== 00:08:36.422 [2024-10-08T16:13:49.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3312697 00:08:36.422 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:36.681 18:13:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.940 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:36.940 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:37.199 18:13:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3310019 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3310019 00:08:37.199 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3310019 Killed "${NVMF_APP[@]}" "$@" 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3314351 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3314351 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3314351 ']' 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.199 18:13:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 [2024-10-08 18:13:50.304325] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:08:37.199 [2024-10-08 18:13:50.304385] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.459 [2024-10-08 18:13:50.393877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.459 [2024-10-08 18:13:50.483950] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.459 [2024-10-08 18:13:50.483992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.459 [2024-10-08 18:13:50.484006] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.459 [2024-10-08 18:13:50.484015] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:37.459 [2024-10-08 18:13:50.484022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.459 [2024-10-08 18:13:50.484504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.029 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.290 [2024-10-08 18:13:51.379698] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:38.290 [2024-10-08 18:13:51.379784] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:38.290 [2024-10-08 18:13:51.379809] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e207f8f7-eb77-4800-b682-a422d745e19b 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e207f8f7-eb77-4800-b682-a422d745e19b 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.290 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.550 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e207f8f7-eb77-4800-b682-a422d745e19b -t 2000 00:08:38.810 [ 00:08:38.810 { 00:08:38.810 "name": "e207f8f7-eb77-4800-b682-a422d745e19b", 00:08:38.810 "aliases": [ 00:08:38.810 "lvs/lvol" 00:08:38.810 ], 00:08:38.810 "product_name": "Logical Volume", 00:08:38.810 "block_size": 4096, 00:08:38.810 "num_blocks": 38912, 00:08:38.810 "uuid": "e207f8f7-eb77-4800-b682-a422d745e19b", 00:08:38.810 "assigned_rate_limits": { 00:08:38.810 "rw_ios_per_sec": 0, 00:08:38.810 "rw_mbytes_per_sec": 0, 
00:08:38.810 "r_mbytes_per_sec": 0, 00:08:38.810 "w_mbytes_per_sec": 0 00:08:38.810 }, 00:08:38.810 "claimed": false, 00:08:38.810 "zoned": false, 00:08:38.810 "supported_io_types": { 00:08:38.810 "read": true, 00:08:38.810 "write": true, 00:08:38.810 "unmap": true, 00:08:38.810 "flush": false, 00:08:38.810 "reset": true, 00:08:38.810 "nvme_admin": false, 00:08:38.810 "nvme_io": false, 00:08:38.810 "nvme_io_md": false, 00:08:38.810 "write_zeroes": true, 00:08:38.810 "zcopy": false, 00:08:38.810 "get_zone_info": false, 00:08:38.810 "zone_management": false, 00:08:38.810 "zone_append": false, 00:08:38.810 "compare": false, 00:08:38.810 "compare_and_write": false, 00:08:38.810 "abort": false, 00:08:38.810 "seek_hole": true, 00:08:38.810 "seek_data": true, 00:08:38.810 "copy": false, 00:08:38.810 "nvme_iov_md": false 00:08:38.810 }, 00:08:38.810 "driver_specific": { 00:08:38.810 "lvol": { 00:08:38.810 "lvol_store_uuid": "32163456-168d-4fe6-b344-e59361ba2887", 00:08:38.810 "base_bdev": "aio_bdev", 00:08:38.810 "thin_provision": false, 00:08:38.810 "num_allocated_clusters": 38, 00:08:38.810 "snapshot": false, 00:08:38.810 "clone": false, 00:08:38.810 "esnap_clone": false 00:08:38.810 } 00:08:38.810 } 00:08:38.810 } 00:08:38.810 ] 00:08:38.810 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:38.810 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:38.810 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:39.069 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:39.069 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:39.069 18:13:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:39.070 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:39.070 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.329 [2024-10-08 18:13:52.356452] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.329 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:39.589 request: 00:08:39.589 { 00:08:39.589 "uuid": "32163456-168d-4fe6-b344-e59361ba2887", 00:08:39.589 "method": "bdev_lvol_get_lvstores", 00:08:39.589 "req_id": 1 00:08:39.589 } 00:08:39.589 Got JSON-RPC error response 00:08:39.589 response: 00:08:39.589 { 00:08:39.589 "code": -19, 00:08:39.589 "message": "No such device" 00:08:39.589 } 00:08:39.589 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:39.589 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.589 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.589 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.589 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.849 aio_bdev 00:08:39.849 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e207f8f7-eb77-4800-b682-a422d745e19b 00:08:39.849 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e207f8f7-eb77-4800-b682-a422d745e19b 00:08:39.850 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.850 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:39.850 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.850 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.850 18:13:52 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.850 18:13:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e207f8f7-eb77-4800-b682-a422d745e19b -t 2000 00:08:40.110 [ 00:08:40.110 { 00:08:40.110 "name": "e207f8f7-eb77-4800-b682-a422d745e19b", 00:08:40.110 "aliases": [ 00:08:40.110 "lvs/lvol" 00:08:40.110 ], 00:08:40.110 "product_name": "Logical Volume", 00:08:40.110 "block_size": 4096, 00:08:40.110 "num_blocks": 38912, 00:08:40.110 "uuid": "e207f8f7-eb77-4800-b682-a422d745e19b", 00:08:40.110 "assigned_rate_limits": { 00:08:40.110 "rw_ios_per_sec": 0, 00:08:40.110 "rw_mbytes_per_sec": 0, 00:08:40.110 "r_mbytes_per_sec": 0, 00:08:40.110 "w_mbytes_per_sec": 0 00:08:40.110 }, 00:08:40.110 "claimed": false, 00:08:40.110 "zoned": false, 00:08:40.110 "supported_io_types": { 00:08:40.110 "read": true, 00:08:40.110 "write": true, 00:08:40.110 "unmap": true, 00:08:40.110 "flush": false, 00:08:40.110 "reset": true, 00:08:40.110 "nvme_admin": false, 00:08:40.110 "nvme_io": false, 00:08:40.110 "nvme_io_md": false, 00:08:40.110 "write_zeroes": true, 00:08:40.110 "zcopy": false, 00:08:40.110 "get_zone_info": false, 00:08:40.110 "zone_management": false, 00:08:40.110 "zone_append": false, 00:08:40.110 "compare": false, 00:08:40.110 "compare_and_write": false, 00:08:40.110 "abort": false, 00:08:40.110 "seek_hole": true, 00:08:40.110 "seek_data": true, 00:08:40.110 "copy": false, 00:08:40.110 "nvme_iov_md": false 00:08:40.110 }, 00:08:40.110 "driver_specific": { 00:08:40.110 "lvol": { 00:08:40.110 "lvol_store_uuid": "32163456-168d-4fe6-b344-e59361ba2887", 00:08:40.110 "base_bdev": "aio_bdev", 00:08:40.110 "thin_provision": false, 00:08:40.110 "num_allocated_clusters": 38, 00:08:40.110 "snapshot": false, 00:08:40.110 "clone": false, 00:08:40.110 "esnap_clone": false 00:08:40.110 } 00:08:40.110 } 00:08:40.110 } 00:08:40.111 ] 00:08:40.111 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:40.111 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:40.111 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.371 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.371 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:40.371 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.631 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.631 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e207f8f7-eb77-4800-b682-a422d745e19b 00:08:40.631 18:13:53 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32163456-168d-4fe6-b344-e59361ba2887 00:08:40.891 18:13:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.150 00:08:41.150 real 0m18.364s 00:08:41.150 user 0m47.516s 00:08:41.150 sys 0m3.513s 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.150 ************************************ 00:08:41.150 END TEST lvs_grow_dirty 00:08:41.150 ************************************ 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:41.150 nvmf_trace.0 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.150 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:41.150 rmmod nvme_rdma 00:08:41.410 rmmod nvme_fabrics 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:41.410 
18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3314351 ']' 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3314351 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3314351 ']' 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3314351 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3314351 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3314351' 00:08:41.410 killing process with pid 3314351 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3314351 00:08:41.410 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3314351 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:41.714 00:08:41.714 real 0m43.791s 00:08:41.714 user 1m10.702s 00:08:41.714 sys 0m10.730s 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 ************************************ 00:08:41.714 END TEST nvmf_lvs_grow 00:08:41.714 ************************************ 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 ************************************ 00:08:41.714 START TEST nvmf_bdev_io_wait 00:08:41.714 ************************************ 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:41.714 * Looking for test storage... 
00:08:41.714 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.714 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.006 --rc genhtml_branch_coverage=1 00:08:42.006 --rc genhtml_function_coverage=1 00:08:42.006 --rc genhtml_legend=1 00:08:42.006 --rc geninfo_all_blocks=1 00:08:42.006 --rc geninfo_unexecuted_blocks=1 00:08:42.006 00:08:42.006 ' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.006 --rc genhtml_branch_coverage=1 00:08:42.006 --rc genhtml_function_coverage=1 00:08:42.006 --rc genhtml_legend=1 00:08:42.006 --rc geninfo_all_blocks=1 00:08:42.006 --rc geninfo_unexecuted_blocks=1 00:08:42.006 00:08:42.006 ' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.006 --rc genhtml_branch_coverage=1 00:08:42.006 --rc genhtml_function_coverage=1 00:08:42.006 --rc genhtml_legend=1 00:08:42.006 --rc geninfo_all_blocks=1 00:08:42.006 --rc geninfo_unexecuted_blocks=1 00:08:42.006 00:08:42.006 ' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.006 --rc genhtml_branch_coverage=1 00:08:42.006 --rc genhtml_function_coverage=1 00:08:42.006 --rc genhtml_legend=1 00:08:42.006 --rc geninfo_all_blocks=1 00:08:42.006 --rc geninfo_unexecuted_blocks=1 00:08:42.006 00:08:42.006 ' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.006 18:13:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.006 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.007 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.007 18:13:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.584 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.584 18:14:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:48.585 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:48.585 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:48.585 18:14:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:48.585 Found net devices under 0000:18:00.0: mlx_0_0 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:48.585 Found net devices under 0000:18:00.1: mlx_0_1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # rdma_device_init 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:48.585 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.585 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:08:48.585 altname enp24s0f0np0 00:08:48.585 altname ens785f0np0 00:08:48.585 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.585 valid_lft forever preferred_lft forever 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.585 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:48.845 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.845 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:08:48.845 altname enp24s0f1np1 00:08:48.845 altname ens785f1np1 00:08:48.845 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.845 valid_lft forever preferred_lft forever 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.845 192.168.100.9' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:48.845 192.168.100.9' 00:08:48.845 
18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # head -n 1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:48.845 192.168.100.9' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # tail -n +2 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # head -n 1 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:48.845 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3317894 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3317894 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3317894 ']' 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.846 18:14:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.846 [2024-10-08 18:14:01.939664] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:08:48.846 [2024-10-08 18:14:01.939732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.105 [2024-10-08 18:14:02.025355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.105 [2024-10-08 18:14:02.115699] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.105 [2024-10-08 18:14:02.115749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.105 [2024-10-08 18:14:02.115759] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.105 [2024-10-08 18:14:02.115768] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.105 [2024-10-08 18:14:02.115774] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.105 [2024-10-08 18:14:02.117170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.105 [2024-10-08 18:14:02.117274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.105 [2024-10-08 18:14:02.117375] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.105 [2024-10-08 18:14:02.117376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.674 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.933 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.933 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.933 18:14:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.933 18:14:02 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.933 [2024-10-08 18:14:02.935505] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x801370/0x805860) succeed. 00:08:49.933 [2024-10-08 18:14:02.945543] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8029b0/0x846f00) succeed. 00:08:49.933 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.933 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.933 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.933 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.193 Malloc0 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.193 [2024-10-08 18:14:03.144932] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3318103 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3318105 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 
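Note: rpc_cmd in the lines above wraps scripts/rpc.py against the target's RPC socket. The provisioning sequence just traced (bdev options, framework init, RDMA transport, malloc bdev, subsystem, namespace, listener) is roughly equivalent to the following manual sketch, assuming $SPDK_DIR points at the SPDK checkout and the target was started with --wait-for-rpc on the default socket; the gen_nvmf_target_json trace resumes immediately below.

    rpc=$SPDK_DIR/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1               # small bdev I/O pool/cache for this test
    $rpc framework_start_init                      # leave the --wait-for-rpc state
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420           # NVMe/RDMA listener on the first target IP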
00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:50.193 { 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme$subsystem", 00:08:50.193 "trtype": "$TEST_TRANSPORT", 00:08:50.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "$NVMF_PORT", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.193 "hdgst": ${hdgst:-false}, 00:08:50.193 "ddgst": ${ddgst:-false} 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 } 00:08:50.193 EOF 00:08:50.193 )") 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3318107 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:50.193 { 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme$subsystem", 00:08:50.193 "trtype": "$TEST_TRANSPORT", 00:08:50.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "$NVMF_PORT", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.193 "hdgst": ${hdgst:-false}, 00:08:50.193 "ddgst": ${ddgst:-false} 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 } 00:08:50.193 EOF 00:08:50.193 )") 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3318110 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:50.193 { 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme$subsystem", 00:08:50.193 "trtype": "$TEST_TRANSPORT", 
00:08:50.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "$NVMF_PORT", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.193 "hdgst": ${hdgst:-false}, 00:08:50.193 "ddgst": ${ddgst:-false} 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 } 00:08:50.193 EOF 00:08:50.193 )") 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:50.193 { 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme$subsystem", 00:08:50.193 "trtype": "$TEST_TRANSPORT", 00:08:50.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "$NVMF_PORT", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.193 "hdgst": ${hdgst:-false}, 00:08:50.193 "ddgst": ${ddgst:-false} 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 } 00:08:50.193 EOF 00:08:50.193 )") 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3318103 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme1", 00:08:50.193 "trtype": "rdma", 00:08:50.193 "traddr": "192.168.100.8", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "4420", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.193 "hdgst": false, 00:08:50.193 "ddgst": false 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 }' 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
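Note: the --json /dev/fd/63 argument on the bdevperf command lines above is bash process substitution: gen_nvmf_target_json expands the heredoc against the discovered target address (the jq-filtered result is printed just below) and bdevperf reads it as its bdev configuration. A hand-rolled sketch of one of the four launches, assuming $SPDK_DIR is the checkout and gen_nvmf_target_json is sourced from nvmf/common.sh:

    # -q/-o/-w/-t: queue depth 128, 4 KiB I/Os, write workload, 1 s run.
    # -m/-i/-s: core mask, shared-memory id, per-instance memory size in MB.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # The read/flush/unmap instances differ only in -m, -i and -w.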
00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme1", 00:08:50.193 "trtype": "rdma", 00:08:50.193 "traddr": "192.168.100.8", 00:08:50.193 "adrfam": "ipv4", 00:08:50.193 "trsvcid": "4420", 00:08:50.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.193 "hdgst": false, 00:08:50.193 "ddgst": false 00:08:50.193 }, 00:08:50.193 "method": "bdev_nvme_attach_controller" 00:08:50.193 }' 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:50.193 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:50.193 "params": { 00:08:50.193 "name": "Nvme1", 00:08:50.193 "trtype": "rdma", 00:08:50.193 "traddr": "192.168.100.8", 00:08:50.194 "adrfam": "ipv4", 00:08:50.194 "trsvcid": "4420", 00:08:50.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.194 "hdgst": false, 00:08:50.194 "ddgst": false 00:08:50.194 }, 00:08:50.194 "method": "bdev_nvme_attach_controller" 00:08:50.194 }' 00:08:50.194 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:50.194 18:14:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:50.194 "params": { 00:08:50.194 "name": "Nvme1", 00:08:50.194 "trtype": "rdma", 00:08:50.194 "traddr": "192.168.100.8", 00:08:50.194 "adrfam": "ipv4", 00:08:50.194 "trsvcid": "4420", 00:08:50.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.194 "hdgst": false, 00:08:50.194 "ddgst": false 00:08:50.194 }, 00:08:50.194 "method": "bdev_nvme_attach_controller" 00:08:50.194 }' 00:08:50.194 [2024-10-08 18:14:03.199665] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:08:50.194 [2024-10-08 18:14:03.199710] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:08:50.194 [2024-10-08 18:14:03.199733] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:50.194 [2024-10-08 18:14:03.199765] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:50.194 [2024-10-08 18:14:03.200091] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:08:50.194 [2024-10-08 18:14:03.200138] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:50.194 [2024-10-08 18:14:03.204255] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:08:50.194 [2024-10-08 18:14:03.204313] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:50.453 [2024-10-08 18:14:03.401916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.453 [2024-10-08 18:14:03.485742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:50.453 [2024-10-08 18:14:03.498088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.453 [2024-10-08 18:14:03.580918] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:50.712 [2024-10-08 18:14:03.631655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.712 [2024-10-08 18:14:03.690714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.712 [2024-10-08 18:14:03.722018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:50.712 [2024-10-08 18:14:03.772637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:50.712 Running I/O for 1 seconds... 00:08:50.971 Running I/O for 1 seconds... 00:08:51.231 Running I/O for 1 seconds... 00:08:51.231 Running I/O for 1 seconds... 00:08:51.799 22018.00 IOPS, 86.01 MiB/s 00:08:51.799 Latency(us) 00:08:51.799 [2024-10-08T16:14:04.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.799 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:51.799 Nvme1n1 : 1.01 22022.77 86.03 0.00 0.00 5794.65 4188.61 12252.38 00:08:51.799 [2024-10-08T16:14:04.972Z] =================================================================================================================== 00:08:51.799 [2024-10-08T16:14:04.972Z] Total : 22022.77 86.03 0.00 0.00 5794.65 4188.61 12252.38 00:08:52.059 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3318105 00:08:52.059 255312.00 IOPS, 997.31 MiB/s 00:08:52.059 Latency(us) 00:08:52.059 [2024-10-08T16:14:05.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.059 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:52.059 Nvme1n1 : 1.00 254928.43 995.81 0.00 0.00 499.53 227.95 1923.34 00:08:52.059 [2024-10-08T16:14:05.232Z] =================================================================================================================== 00:08:52.059 [2024-10-08T16:14:05.232Z] Total : 254928.43 995.81 0.00 0.00 499.53 227.95 1923.34 00:08:52.319 15324.00 IOPS, 59.86 MiB/s 00:08:52.319 Latency(us) 00:08:52.319 [2024-10-08T16:14:05.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.319 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:52.319 Nvme1n1 : 1.01 15372.83 60.05 0.00 0.00 8299.66 4673.00 17210.32 00:08:52.319 [2024-10-08T16:14:05.492Z] =================================================================================================================== 00:08:52.319 [2024-10-08T16:14:05.492Z] Total : 15372.83 60.05 0.00 0.00 8299.66 4673.00 17210.32 00:08:52.319 18291.00 IOPS, 71.45 MiB/s 00:08:52.319 Latency(us) 00:08:52.319 [2024-10-08T16:14:05.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.319 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:52.319 Nvme1n1 : 1.01 18385.04 71.82 0.00 0.00 6946.92 2692.67 16640.45 00:08:52.319 
[2024-10-08T16:14:05.492Z] =================================================================================================================== 00:08:52.319 [2024-10-08T16:14:05.492Z] Total : 18385.04 71.82 0.00 0.00 6946.92 2692.67 16640.45 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3318107 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3318110 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:52.580 rmmod nvme_rdma 00:08:52.580 rmmod nvme_fabrics 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3317894 ']' 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3317894 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3317894 ']' 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3317894 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:52.580 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3317894 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3317894' 00:08:52.581 killing process with pid 3317894 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3317894 00:08:52.581 18:14:05 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3317894 00:08:52.840 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:52.840 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:52.840 00:08:52.840 real 0m11.294s 00:08:52.840 user 0m23.323s 00:08:52.840 sys 0m7.157s 00:08:52.840 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.840 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.840 ************************************ 00:08:52.840 END TEST nvmf_bdev_io_wait 00:08:52.840 ************************************ 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.099 ************************************ 00:08:53.099 START TEST nvmf_queue_depth 00:08:53.099 ************************************ 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:53.099 * Looking for test storage... 
00:08:53.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.099 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.359 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.360 --rc genhtml_branch_coverage=1 00:08:53.360 --rc genhtml_function_coverage=1 00:08:53.360 --rc genhtml_legend=1 00:08:53.360 --rc geninfo_all_blocks=1 00:08:53.360 --rc geninfo_unexecuted_blocks=1 00:08:53.360 00:08:53.360 ' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.360 --rc genhtml_branch_coverage=1 00:08:53.360 --rc genhtml_function_coverage=1 00:08:53.360 --rc genhtml_legend=1 00:08:53.360 --rc geninfo_all_blocks=1 00:08:53.360 --rc geninfo_unexecuted_blocks=1 00:08:53.360 00:08:53.360 ' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.360 --rc genhtml_branch_coverage=1 00:08:53.360 --rc genhtml_function_coverage=1 00:08:53.360 --rc genhtml_legend=1 00:08:53.360 --rc geninfo_all_blocks=1 00:08:53.360 --rc geninfo_unexecuted_blocks=1 00:08:53.360 00:08:53.360 ' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.360 --rc genhtml_branch_coverage=1 00:08:53.360 --rc genhtml_function_coverage=1 00:08:53.360 --rc genhtml_legend=1 00:08:53.360 --rc geninfo_all_blocks=1 00:08:53.360 --rc geninfo_unexecuted_blocks=1 00:08:53.360 00:08:53.360 ' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.360 18:14:06 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.360 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.360 18:14:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.934 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:59.935 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:59.935 Found 0000:18:00.1 (0x15b3 - 0x1015) 
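Note: the discovery loop traced above matches Mellanox PCI functions (vendor 0x15b3; 0x1015 is a ConnectX-4 Lx part) and then resolves their netdev names through sysfs. A quick manual equivalent, assuming lspci and the usual sysfs layout, would be:

    # List Mellanox NICs and the netdevs bound to them (sketch only).
    for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done
    # On this host: 0000:18:00.0 -> mlx_0_0 and 0000:18:00.1 -> mlx_0_1,
    # matching the "Found net devices under ..." lines in the trace.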
00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:59.935 Found net devices under 0000:18:00.0: mlx_0_0 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:59.935 Found net devices under 0000:18:00.1: mlx_0_1 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # rdma_device_init 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:59.935 18:14:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:59.935 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.935 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:08:59.935 altname enp24s0f0np0 00:08:59.935 altname ens785f0np0 00:08:59.935 inet 192.168.100.8/24 scope global mlx_0_0 00:08:59.935 valid_lft forever preferred_lft forever 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:59.935 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:59.935 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.935 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:08:59.935 altname enp24s0f1np1 00:08:59.936 altname ens785f1np1 00:08:59.936 inet 192.168.100.9/24 scope global mlx_0_1 00:08:59.936 valid_lft forever preferred_lft forever 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:59.936 18:14:13 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:59.936 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:00.196 192.168.100.9' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:00.196 192.168.100.9' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@483 -- # head -n 1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:00.196 192.168.100.9' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # tail -n +2 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # head -n 1 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3321429 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3321429 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3321429 ']' 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.196 18:14:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.196 [2024-10-08 18:14:13.254160] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:09:00.196 [2024-10-08 18:14:13.254224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.196 [2024-10-08 18:14:13.342569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.456 [2024-10-08 18:14:13.432662] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.456 [2024-10-08 18:14:13.432703] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.456 [2024-10-08 18:14:13.432713] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.456 [2024-10-08 18:14:13.432721] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.456 [2024-10-08 18:14:13.432728] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.456 [2024-10-08 18:14:13.433200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.026 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.026 [2024-10-08 18:14:14.173162] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24fd3b0/0x25018a0) succeed. 00:09:01.026 [2024-10-08 18:14:14.182034] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24fe8b0/0x2542f40) succeed. 
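For orientation, the target-side bring-up traced above reduces to the following shell sketch. The binary path, core mask and transport options are copied from this run; invoking scripts/rpc.py directly is only an assumption about how one would reproduce the suite's rpc_cmd wrapper by hand, not the exact helper the harness calls.

    # start the NVMe-oF target pinned to core 1 (-m 0x2), as nvmfappstart does above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # once /var/tmp/spdk.sock is listening, create the RDMA transport used by the queue-depth test
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices above correspond to the transport picking up both mlx5 ports discovered earlier (192.168.100.8 and 192.168.100.9).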
00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.285 Malloc0 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.285 [2024-10-08 18:14:14.284420] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3321608 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3321608 /var/tmp/bdevperf.sock 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3321608 ']' 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.285 18:14:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.285 [2024-10-08 18:14:14.336190] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:09:01.285 [2024-10-08 18:14:14.336244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321608 ] 00:09:01.285 [2024-10-08 18:14:14.420252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.544 [2024-10-08 18:14:14.508931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.113 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.113 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:02.113 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:02.114 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.114 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.373 NVMe0n1 00:09:02.373 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.373 18:14:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.373 Running I/O for 10 seconds... 
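The initiator side that produces the run below can be condensed the same way; the flags, socket path and listener address are taken from the trace, and calling rpc.py and bdevperf.py directly is again a hand-reproduction sketch rather than the exact wrappers the test script uses.

    # bdevperf in RPC-driven mode: queue depth 1024, 4 KiB I/O, verify workload, 10 s run
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # attach the remote namespace over RDMA; it appears to bdevperf as NVMe0n1
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the timed run; throughput and latency are reported below
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the figures that follow, 17590.74 IOPS at a 4096-byte I/O size is 17590.74 x 4096 / 2^20 ~ 68.7 MiB/s, which matches the reported throughput.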
00:09:04.249 17105.00 IOPS, 66.82 MiB/s [2024-10-08T16:14:18.802Z] 17408.00 IOPS, 68.00 MiB/s [2024-10-08T16:14:19.740Z] 17408.00 IOPS, 68.00 MiB/s [2024-10-08T16:14:20.678Z] 17513.75 IOPS, 68.41 MiB/s [2024-10-08T16:14:21.617Z] 17469.60 IOPS, 68.24 MiB/s [2024-10-08T16:14:22.555Z] 17477.17 IOPS, 68.27 MiB/s [2024-10-08T16:14:23.495Z] 17544.29 IOPS, 68.53 MiB/s [2024-10-08T16:14:24.433Z] 17536.00 IOPS, 68.50 MiB/s [2024-10-08T16:14:25.814Z] 17537.78 IOPS, 68.51 MiB/s [2024-10-08T16:14:25.814Z] 17575.20 IOPS, 68.65 MiB/s 00:09:12.641 Latency(us) 00:09:12.641 [2024-10-08T16:14:25.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.641 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:12.641 Verification LBA range: start 0x0 length 0x4000 00:09:12.641 NVMe0n1 : 10.04 17590.74 68.71 0.00 0.00 58035.34 10485.76 37156.06 00:09:12.641 [2024-10-08T16:14:25.814Z] =================================================================================================================== 00:09:12.641 [2024-10-08T16:14:25.814Z] Total : 17590.74 68.71 0.00 0.00 58035.34 10485.76 37156.06 00:09:12.641 { 00:09:12.641 "results": [ 00:09:12.641 { 00:09:12.641 "job": "NVMe0n1", 00:09:12.641 "core_mask": "0x1", 00:09:12.641 "workload": "verify", 00:09:12.641 "status": "finished", 00:09:12.641 "verify_range": { 00:09:12.641 "start": 0, 00:09:12.641 "length": 16384 00:09:12.641 }, 00:09:12.641 "queue_depth": 1024, 00:09:12.641 "io_size": 4096, 00:09:12.641 "runtime": 10.038466, 00:09:12.641 "iops": 17590.735476914502, 00:09:12.641 "mibps": 68.71381045669727, 00:09:12.641 "io_failed": 0, 00:09:12.641 "io_timeout": 0, 00:09:12.641 "avg_latency_us": 58035.34206898453, 00:09:12.641 "min_latency_us": 10485.76, 00:09:12.641 "max_latency_us": 37156.062608695654 00:09:12.641 } 00:09:12.641 ], 00:09:12.641 "core_count": 1 00:09:12.641 } 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3321608 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3321608 ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3321608 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3321608 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3321608' 00:09:12.641 killing process with pid 3321608 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3321608 00:09:12.641 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.641 00:09:12.641 Latency(us) 00:09:12.641 [2024-10-08T16:14:25.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.641 [2024-10-08T16:14:25.814Z] 
=================================================================================================================== 00:09:12.641 [2024-10-08T16:14:25.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3321608 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:12.641 rmmod nvme_rdma 00:09:12.641 rmmod nvme_fabrics 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3321429 ']' 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3321429 00:09:12.641 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3321429 ']' 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3321429 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3321429 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3321429' 00:09:12.901 killing process with pid 3321429 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3321429 00:09:12.901 18:14:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3321429 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:13.161 00:09:13.161 real 0m20.042s 00:09:13.161 user 0m26.622s 00:09:13.161 sys 0m6.078s 00:09:13.161 
18:14:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 ************************************ 00:09:13.161 END TEST nvmf_queue_depth 00:09:13.161 ************************************ 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 ************************************ 00:09:13.161 START TEST nvmf_target_multipath 00:09:13.161 ************************************ 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:13.161 * Looking for test storage... 00:09:13.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:13.161 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:13.421 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.422 --rc genhtml_branch_coverage=1 00:09:13.422 --rc genhtml_function_coverage=1 00:09:13.422 --rc genhtml_legend=1 00:09:13.422 --rc geninfo_all_blocks=1 00:09:13.422 --rc geninfo_unexecuted_blocks=1 00:09:13.422 00:09:13.422 ' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.422 --rc genhtml_branch_coverage=1 00:09:13.422 --rc genhtml_function_coverage=1 00:09:13.422 --rc genhtml_legend=1 00:09:13.422 --rc geninfo_all_blocks=1 00:09:13.422 --rc geninfo_unexecuted_blocks=1 00:09:13.422 00:09:13.422 ' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.422 --rc genhtml_branch_coverage=1 00:09:13.422 --rc genhtml_function_coverage=1 00:09:13.422 --rc genhtml_legend=1 00:09:13.422 --rc geninfo_all_blocks=1 00:09:13.422 --rc geninfo_unexecuted_blocks=1 00:09:13.422 00:09:13.422 ' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.422 --rc genhtml_branch_coverage=1 00:09:13.422 --rc genhtml_function_coverage=1 00:09:13.422 --rc genhtml_legend=1 00:09:13.422 --rc geninfo_all_blocks=1 00:09:13.422 --rc geninfo_unexecuted_blocks=1 00:09:13.422 00:09:13.422 ' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.422 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.422 18:14:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:20.053 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.053 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:20.054 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:20.054 Found net devices under 0000:18:00.0: mlx_0_0 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:20.054 
18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:20.054 Found net devices under 0000:18:00.1: mlx_0_1 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # rdma_device_init 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:20.054 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:20.314 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.314 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:09:20.314 altname enp24s0f0np0 00:09:20.314 altname ens785f0np0 00:09:20.314 inet 192.168.100.8/24 scope global mlx_0_0 00:09:20.314 valid_lft forever preferred_lft forever 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:20.314 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:20.314 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.314 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:09:20.314 altname enp24s0f1np1 00:09:20.314 altname ens785f1np1 00:09:20.315 inet 192.168.100.9/24 scope global mlx_0_1 00:09:20.315 valid_lft forever preferred_lft forever 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:20.315 192.168.100.9' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:20.315 192.168.100.9' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # head -n 1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:20.315 192.168.100.9' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # tail -n +2 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # head -n 1 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:20.315 run this test only with TCP transport for now 00:09:20.315 18:14:33 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:20.315 rmmod nvme_rdma 00:09:20.315 rmmod nvme_fabrics 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:20.315 00:09:20.315 real 0m7.215s 00:09:20.315 user 0m2.042s 00:09:20.315 sys 0m5.387s 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.315 ************************************ 00:09:20.315 END TEST nvmf_target_multipath 00:09:20.315 ************************************ 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.315 18:14:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.575 ************************************ 00:09:20.575 START TEST nvmf_zcopy 00:09:20.575 ************************************ 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:20.575 * Looking for test storage... 00:09:20.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.575 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.575 --rc genhtml_branch_coverage=1 00:09:20.575 --rc genhtml_function_coverage=1 00:09:20.575 --rc genhtml_legend=1 00:09:20.575 --rc geninfo_all_blocks=1 00:09:20.576 --rc geninfo_unexecuted_blocks=1 00:09:20.576 00:09:20.576 ' 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.576 --rc genhtml_branch_coverage=1 00:09:20.576 --rc genhtml_function_coverage=1 00:09:20.576 --rc genhtml_legend=1 00:09:20.576 --rc geninfo_all_blocks=1 00:09:20.576 --rc geninfo_unexecuted_blocks=1 00:09:20.576 00:09:20.576 ' 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.576 --rc genhtml_branch_coverage=1 00:09:20.576 --rc genhtml_function_coverage=1 00:09:20.576 --rc genhtml_legend=1 00:09:20.576 --rc geninfo_all_blocks=1 00:09:20.576 --rc geninfo_unexecuted_blocks=1 00:09:20.576 00:09:20.576 ' 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.576 --rc genhtml_branch_coverage=1 00:09:20.576 --rc genhtml_function_coverage=1 00:09:20.576 --rc genhtml_legend=1 00:09:20.576 --rc geninfo_all_blocks=1 00:09:20.576 --rc geninfo_unexecuted_blocks=1 00:09:20.576 00:09:20.576 ' 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.576 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.836 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.836 18:14:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:27.412 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:27.412 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
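The device discovery traced above (nvmf/common.sh@366-@377) amounts to walking the PCI bus and keeping the functions whose vendor/device IDs are on the Mellanox list, which is how both 0000:18:00.0 and 0000:18:00.1 get picked up. A minimal stand-alone sketch of that idea, assuming the usual sysfs layout; the helper name and output wording are illustrative, not SPDK's:

    # List PCI functions matching one vendor:device pair, e.g. the
    # 0x15b3:0x1015 pair reported in this run.
    find_pci_by_id() {
        local vendor=$1 device=$2 dev
        for dev in /sys/bus/pci/devices/*; do
            [[ $(<"$dev/vendor") == "$vendor" && $(<"$dev/device") == "$device" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
        done
    }
    find_pci_by_id 0x15b3 0x1015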
00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:27.412 Found net devices under 0000:18:00.0: mlx_0_0 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:27.412 Found net devices under 0000:18:00.1: mlx_0_1 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # rdma_device_init 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.412 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:27.413 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.413 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:09:27.413 altname enp24s0f0np0 00:09:27.413 altname ens785f0np0 00:09:27.413 inet 192.168.100.8/24 scope global mlx_0_0 
00:09:27.413 valid_lft forever preferred_lft forever 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:27.413 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.413 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:09:27.413 altname enp24s0f1np1 00:09:27.413 altname ens785f1np1 00:09:27.413 inet 192.168.100.9/24 scope global mlx_0_1 00:09:27.413 valid_lft forever preferred_lft forever 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.413 18:14:40 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:27.413 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:27.413 192.168.100.9' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:27.673 192.168.100.9' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # head -n 1 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:27.673 192.168.100.9' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # tail -n +2 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # head -n 1 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3329060 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3329060 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3329060 ']' 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.673 18:14:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.673 [2024-10-08 18:14:40.686895] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:09:27.673 [2024-10-08 18:14:40.686959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.673 [2024-10-08 18:14:40.771682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.933 [2024-10-08 18:14:40.858835] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.933 [2024-10-08 18:14:40.858880] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.933 [2024-10-08 18:14:40.858890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.933 [2024-10-08 18:14:40.858899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.933 [2024-10-08 18:14:40.858906] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
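The address gathering traced earlier in this run (get_ip_address at nvmf/common.sh@117 and the RDMA_IP_LIST handling at @482-@484) is a plain text pipeline: take the first IPv4 address of each RDMA-capable interface, then split the resulting list into first and second target IPs. A stand-alone sketch of the same steps, using the interface names and addresses seen here:

    get_ip_address() {                 # first IPv4 address of an interface
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ip_list=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    first_ip=$(echo "$rdma_ip_list" | head -n 1)                  # 192.168.100.8 in this run
    second_ip=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run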
00:09:27.933 [2024-10-08 18:14:40.859366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:28.547 Unsupported transport: rdma 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.547 nvmf_trace.0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:28.547 rmmod nvme_rdma 00:09:28.547 rmmod nvme_fabrics 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
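The process_shm step traced above packs the application's shared-memory trace file (nvmf_trace.0) into the job's output directory before teardown. Roughly, under the paths seen in this run and with a helper name of our own choosing:

    archive_shm_traces() {
        local id=$1 out_dir=$2 f
        for f in $(find /dev/shm -name "*.${id}" -printf '%f\n'); do
            tar -C /dev/shm -czf "$out_dir/${f}_shm.tar.gz" "$f"
        done
    }
    archive_shm_traces 0 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output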
00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3329060 ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3329060 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3329060 ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3329060 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.547 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3329060 00:09:28.806 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:28.806 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:28.806 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3329060' 00:09:28.806 killing process with pid 3329060 00:09:28.806 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3329060 00:09:28.806 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3329060 00:09:28.807 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:28.807 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:28.807 00:09:28.807 real 0m8.428s 00:09:28.807 user 0m3.542s 00:09:28.807 sys 0m5.704s 00:09:28.807 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.807 18:14:41 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.807 ************************************ 00:09:28.807 END TEST nvmf_zcopy 00:09:28.807 ************************************ 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.066 ************************************ 00:09:29.066 START TEST nvmf_nmic 00:09:29.066 ************************************ 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:29.066 * Looking for test storage... 
00:09:29.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.066 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.067 --rc genhtml_branch_coverage=1 00:09:29.067 --rc genhtml_function_coverage=1 00:09:29.067 --rc genhtml_legend=1 00:09:29.067 --rc geninfo_all_blocks=1 00:09:29.067 --rc geninfo_unexecuted_blocks=1 00:09:29.067 00:09:29.067 ' 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.067 --rc genhtml_branch_coverage=1 00:09:29.067 --rc genhtml_function_coverage=1 00:09:29.067 --rc genhtml_legend=1 00:09:29.067 --rc geninfo_all_blocks=1 00:09:29.067 --rc geninfo_unexecuted_blocks=1 00:09:29.067 00:09:29.067 ' 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.067 --rc genhtml_branch_coverage=1 00:09:29.067 --rc genhtml_function_coverage=1 00:09:29.067 --rc genhtml_legend=1 00:09:29.067 --rc geninfo_all_blocks=1 00:09:29.067 --rc geninfo_unexecuted_blocks=1 00:09:29.067 00:09:29.067 ' 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.067 --rc genhtml_branch_coverage=1 00:09:29.067 --rc genhtml_function_coverage=1 00:09:29.067 --rc genhtml_legend=1 00:09:29.067 --rc geninfo_all_blocks=1 00:09:29.067 --rc geninfo_unexecuted_blocks=1 00:09:29.067 00:09:29.067 ' 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.067 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.327 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
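The common.sh preamble traced above generates the host identity (NVME_HOSTNQN/NVME_HOSTID) that the later nvme connect calls in this test reuse. A minimal sketch of that flow, assuming nvme-cli is installed and that the host ID is simply the UUID portion of the generated NQN (the exact derivation inside common.sh may differ):
  # Generate a host NQN once and reuse it for every connect in the test run.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: bare UUID taken from the NQN
  # Connect over RDMA to the first listener, mirroring the invocation seen later in this log.
  nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420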
00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.327 18:14:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.903 18:14:48 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:35.903 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:35.904 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:35.904 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:35.904 Found net devices under 0000:18:00.0: mlx_0_0 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:35.904 Found net devices under 0000:18:00.1: mlx_0_1 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # rdma_device_init 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:35.904 18:14:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:35.904 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.904 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:09:35.904 altname enp24s0f0np0 00:09:35.904 altname ens785f0np0 
00:09:35.904 inet 192.168.100.8/24 scope global mlx_0_0 00:09:35.904 valid_lft forever preferred_lft forever 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:35.904 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.904 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:09:35.904 altname enp24s0f1np1 00:09:35.904 altname ens785f1np1 00:09:35.904 inet 192.168.100.9/24 scope global mlx_0_1 00:09:35.904 valid_lft forever preferred_lft forever 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:35.904 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:36.164 
18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:36.164 192.168.100.9' 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:36.164 192.168.100.9' 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # head -n 1 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:36.164 192.168.100.9' 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # tail -n +2 00:09:36.164 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # head -n 1 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3332170 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3332170 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3332170 ']' 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.165 18:14:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.165 [2024-10-08 18:14:49.245313] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:09:36.165 [2024-10-08 18:14:49.245375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.165 [2024-10-08 18:14:49.331308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.424 [2024-10-08 18:14:49.426255] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.424 [2024-10-08 18:14:49.426305] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.424 [2024-10-08 18:14:49.426315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.424 [2024-10-08 18:14:49.426325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.424 [2024-10-08 18:14:49.426333] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
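nvmfappstart above launches the target with a four-core mask and then waits for its RPC socket before issuing rpc_cmd calls. A condensed sketch of that start-and-wait pattern, with the binary, flags, and socket path taken from this log and the polling loop assumed rather than copied from the waitforlisten helper:
  # Start the NVMe-oF target on cores 0-3 with all tracepoint groups enabled.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the application answers, then proceed with configuration.
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done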
00:09:36.424 [2024-10-08 18:14:49.427655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.424 [2024-10-08 18:14:49.427759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.424 [2024-10-08 18:14:49.427859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.424 [2024-10-08 18:14:49.427860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.998 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.259 [2024-10-08 18:14:50.172737] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21622e0/0x21667d0) succeed. 00:09:37.259 [2024-10-08 18:14:50.183282] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2163920/0x21a7e70) succeed. 00:09:37.259 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.259 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.259 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 Malloc0 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:37.260 18:14:50 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 [2024-10-08 18:14:50.350908] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:37.260 test case1: single bdev can't be used in multiple subsystems 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 [2024-10-08 18:14:50.374770] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:37.260 [2024-10-08 18:14:50.374795] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:37.260 [2024-10-08 18:14:50.374805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.260 request: 00:09:37.260 { 00:09:37.260 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:37.260 "namespace": { 00:09:37.260 "bdev_name": "Malloc0", 00:09:37.260 "no_auto_visible": false 00:09:37.260 }, 00:09:37.260 "method": "nvmf_subsystem_add_ns", 00:09:37.260 "req_id": 1 00:09:37.260 } 00:09:37.260 Got JSON-RPC error response 00:09:37.260 response: 00:09:37.260 { 00:09:37.260 "code": -32602, 00:09:37.260 "message": "Invalid parameters" 00:09:37.260 } 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:37.260 Adding namespace failed - expected result. 
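Test case1 above exercises the exclusive claim a subsystem takes on its backing bdev: the second nvmf_subsystem_add_ns fails because Malloc0 is already claimed. A condensed sketch of the same RPC sequence driven directly through scripts/rpc.py (relative path assumed; the failure mode matches the -32602 "Invalid parameters" response shown above):
  # Create the shared bdev and expose it through the first subsystem.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
  # A second subsystem cannot reuse the same bdev as a namespace...
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Invalid parameters (-32602)
  # ...but one subsystem can expose multiple listeners, which is what test case2 verifies next.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421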
00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:37.260 test case2: host connect to nvmf target in multiple paths 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 [2024-10-08 18:14:50.390855] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.260 18:14:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:38.640 18:14:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:39.579 18:14:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.579 18:14:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:39.579 18:14:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.579 18:14:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:39.579 18:14:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:41.487 18:14:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:41.487 [global] 00:09:41.487 thread=1 00:09:41.487 invalidate=1 00:09:41.487 rw=write 00:09:41.487 time_based=1 00:09:41.487 runtime=1 00:09:41.487 ioengine=libaio 00:09:41.487 direct=1 00:09:41.487 bs=4096 00:09:41.487 iodepth=1 00:09:41.487 norandommap=0 00:09:41.487 numjobs=1 00:09:41.487 00:09:41.487 verify_dump=1 00:09:41.487 verify_backlog=512 00:09:41.487 verify_state_save=0 00:09:41.487 do_verify=1 00:09:41.487 verify=crc32c-intel 00:09:41.487 [job0] 00:09:41.487 filename=/dev/nvme0n1 00:09:41.487 Could not set queue depth (nvme0n1) 00:09:41.746 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.746 fio-3.35 00:09:41.746 Starting 1 thread 00:09:43.126 00:09:43.126 job0: (groupid=0, jobs=1): err= 0: pid=3333037: Tue Oct 8 18:14:55 2024 00:09:43.126 read: IOPS=6671, BW=26.1MiB/s (27.3MB/s)(26.1MiB/1001msec) 00:09:43.126 slat (nsec): min=6180, max=35744, avg=9029.31, stdev=1190.79 00:09:43.126 clat (usec): min=48, max=101, avg=60.38, stdev= 4.11 00:09:43.126 lat (usec): min=55, max=129, avg=69.41, stdev= 4.44 00:09:43.126 clat percentiles (usec): 00:09:43.126 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:09:43.126 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:09:43.126 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 68], 00:09:43.126 | 99.00th=[ 72], 99.50th=[ 75], 99.90th=[ 81], 99.95th=[ 89], 00:09:43.126 | 99.99th=[ 102] 00:09:43.126 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:09:43.126 slat (nsec): min=10509, max=41301, avg=11853.56, stdev=1258.11 00:09:43.126 clat (nsec): min=33862, max=92354, avg=57714.69, stdev=3978.30 00:09:43.126 lat (usec): min=58, max=133, avg=69.57, stdev= 4.16 00:09:43.126 clat percentiles (nsec): 00:09:43.126 | 1.00th=[49920], 5.00th=[51968], 10.00th=[52992], 20.00th=[54528], 00:09:43.126 | 30.00th=[55552], 40.00th=[56576], 50.00th=[57600], 60.00th=[58112], 00:09:43.126 | 70.00th=[59648], 80.00th=[60672], 90.00th=[63232], 95.00th=[64768], 00:09:43.126 | 99.00th=[68096], 99.50th=[70144], 99.90th=[74240], 99.95th=[76288], 00:09:43.126 | 99.99th=[92672] 00:09:43.126 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:09:43.126 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:09:43.126 lat (usec) : 50=0.53%, 100=99.46%, 250=0.01% 00:09:43.126 cpu : usr=9.80%, sys=14.10%, ctx=13846, majf=0, minf=1 00:09:43.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.126 issued rwts: total=6678,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.126 00:09:43.126 Run status group 0 (all jobs): 00:09:43.126 READ: bw=26.1MiB/s (27.3MB/s), 26.1MiB/s-26.1MiB/s (27.3MB/s-27.3MB/s), io=26.1MiB (27.4MB), run=1001-1001msec 00:09:43.126 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:43.126 00:09:43.126 Disk stats (read/write): 00:09:43.126 nvme0n1: ios=6194/6336, merge=0/0, ticks=323/332, in_queue=655, util=90.78% 00:09:43.126 18:14:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.031 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.031 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:45.031 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:45.032 
18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:45.032 rmmod nvme_rdma 00:09:45.032 rmmod nvme_fabrics 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3332170 ']' 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3332170 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3332170 ']' 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3332170 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.032 18:14:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3332170 00:09:45.032 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.032 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.032 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3332170' 00:09:45.032 killing process with pid 3332170 00:09:45.032 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3332170 00:09:45.032 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3332170 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:45.291 00:09:45.291 real 0m16.308s 00:09:45.291 user 0m40.470s 00:09:45.291 sys 0m6.383s 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.291 ************************************ 00:09:45.291 END TEST nvmf_nmic 
00:09:45.291 ************************************ 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.291 18:14:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.292 18:14:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.292 ************************************ 00:09:45.292 START TEST nvmf_fio_target 00:09:45.292 ************************************ 00:09:45.292 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:45.551 * Looking for test storage... 00:09:45.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.552 --rc genhtml_branch_coverage=1 00:09:45.552 --rc genhtml_function_coverage=1 00:09:45.552 --rc genhtml_legend=1 00:09:45.552 --rc geninfo_all_blocks=1 00:09:45.552 --rc geninfo_unexecuted_blocks=1 00:09:45.552 00:09:45.552 ' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.552 --rc genhtml_branch_coverage=1 00:09:45.552 --rc genhtml_function_coverage=1 00:09:45.552 --rc genhtml_legend=1 00:09:45.552 --rc geninfo_all_blocks=1 00:09:45.552 --rc geninfo_unexecuted_blocks=1 00:09:45.552 00:09:45.552 ' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.552 --rc genhtml_branch_coverage=1 00:09:45.552 --rc genhtml_function_coverage=1 00:09:45.552 --rc genhtml_legend=1 00:09:45.552 --rc geninfo_all_blocks=1 00:09:45.552 --rc geninfo_unexecuted_blocks=1 00:09:45.552 00:09:45.552 ' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.552 --rc genhtml_branch_coverage=1 00:09:45.552 --rc genhtml_function_coverage=1 00:09:45.552 --rc genhtml_legend=1 00:09:45.552 --rc geninfo_all_blocks=1 00:09:45.552 --rc geninfo_unexecuted_blocks=1 00:09:45.552 00:09:45.552 ' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.552 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.553 
18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.553 18:14:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.683 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:53.684 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:53.684 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:53.684 Found net devices under 0000:18:00.0: mlx_0_0 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:53.684 Found net devices under 0000:18:00.1: mlx_0_1 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # rdma_device_init 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:53.684 18:15:05 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:53.684 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.684 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:09:53.684 altname enp24s0f0np0 00:09:53.684 altname ens785f0np0 00:09:53.684 inet 192.168.100.8/24 scope global mlx_0_0 00:09:53.684 valid_lft forever preferred_lft forever 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.684 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:53.685 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.685 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:09:53.685 altname enp24s0f1np1 00:09:53.685 altname ens785f1np1 00:09:53.685 inet 192.168.100.9/24 scope global mlx_0_1 00:09:53.685 valid_lft forever preferred_lft forever 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.685 18:15:05 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:53.685 192.168.100.9' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:53.685 192.168.100.9' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # head -n 1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:53.685 192.168.100.9' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # tail -n +2 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # head -n 1 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3336967 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3336967 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3336967 ']' 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.685 18:15:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.685 [2024-10-08 18:15:05.668202] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:09:53.685 [2024-10-08 18:15:05.668271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.685 [2024-10-08 18:15:05.755826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.685 [2024-10-08 18:15:05.849342] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:53.685 [2024-10-08 18:15:05.849387] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.685 [2024-10-08 18:15:05.849397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.685 [2024-10-08 18:15:05.849407] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.685 [2024-10-08 18:15:05.849414] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.685 [2024-10-08 18:15:05.850738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.685 [2024-10-08 18:15:05.850841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.685 [2024-10-08 18:15:05.850943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.685 [2024-10-08 18:15:05.850944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.685 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.685 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:53.685 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:53.686 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.686 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.686 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.686 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:53.686 [2024-10-08 18:15:06.762854] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15012e0/0x15057d0) succeed. 00:09:53.686 [2024-10-08 18:15:06.773307] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1502920/0x1546e70) succeed. 
00:09:53.945 18:15:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.204 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:54.204 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.204 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:54.204 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.463 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:54.464 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.723 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:54.723 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:54.982 18:15:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.242 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:55.242 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.500 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:55.500 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.500 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:55.500 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:55.759 18:15:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.018 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.018 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.277 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.277 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.536 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:56.536 [2024-10-08 18:15:09.651033] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:56.536 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:56.795 18:15:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:57.066 18:15:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:58.053 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:58.053 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:58.054 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.054 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:58.054 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:58.054 18:15:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:59.961 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:59.961 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:59.961 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.220 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:00.220 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.220 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:00.220 18:15:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.220 [global] 00:10:00.220 thread=1 00:10:00.220 invalidate=1 00:10:00.220 rw=write 00:10:00.220 time_based=1 00:10:00.220 runtime=1 00:10:00.220 ioengine=libaio 00:10:00.220 direct=1 00:10:00.220 bs=4096 00:10:00.220 iodepth=1 00:10:00.220 norandommap=0 00:10:00.220 numjobs=1 00:10:00.220 00:10:00.220 verify_dump=1 00:10:00.220 verify_backlog=512 00:10:00.220 verify_state_save=0 00:10:00.220 do_verify=1 00:10:00.220 verify=crc32c-intel 00:10:00.220 [job0] 00:10:00.220 filename=/dev/nvme0n1 00:10:00.220 [job1] 00:10:00.220 filename=/dev/nvme0n2 00:10:00.220 [job2] 00:10:00.220 filename=/dev/nvme0n3 00:10:00.220 [job3] 00:10:00.220 filename=/dev/nvme0n4 00:10:00.220 Could not set queue depth (nvme0n1) 00:10:00.220 Could not set queue depth (nvme0n2) 00:10:00.220 Could not set queue depth (nvme0n3) 00:10:00.220 Could not set queue depth (nvme0n4) 00:10:00.478 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.479 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.479 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.479 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.479 fio-3.35 00:10:00.479 Starting 4 threads 00:10:01.868 00:10:01.868 job0: (groupid=0, jobs=1): err= 0: pid=3338263: Tue Oct 8 18:15:14 2024 00:10:01.868 read: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec) 00:10:01.868 slat (nsec): min=8285, max=35910, avg=9003.01, stdev=1012.75 00:10:01.868 clat (usec): min=65, max=297, avg=100.33, stdev=21.23 00:10:01.868 lat (usec): min=74, max=306, avg=109.33, stdev=21.28 00:10:01.868 clat percentiles (usec): 00:10:01.868 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 84], 00:10:01.868 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 98], 00:10:01.868 | 70.00th=[ 115], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 135], 00:10:01.868 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 225], 99.95th=[ 251], 00:10:01.868 | 99.99th=[ 297] 00:10:01.868 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:01.868 slat (nsec): min=8890, max=38178, avg=11859.30, stdev=1316.52 00:10:01.868 clat (usec): min=64, max=297, avg=98.68, stdev=21.10 00:10:01.868 lat (usec): min=76, max=310, avg=110.54, stdev=21.07 00:10:01.868 clat percentiles (usec): 00:10:01.868 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 81], 00:10:01.868 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 106], 00:10:01.868 | 70.00th=[ 114], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 130], 00:10:01.868 | 99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 221], 99.95th=[ 247], 00:10:01.868 | 99.99th=[ 297] 00:10:01.868 bw ( KiB/s): min=20480, max=20480, per=28.90%, avg=20480.00, stdev= 0.00, samples=1 00:10:01.868 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:01.868 lat (usec) : 100=59.03%, 250=40.92%, 500=0.05% 00:10:01.868 cpu : usr=6.00%, sys=9.40%, ctx=8880, majf=0, minf=1 00:10:01.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.868 issued rwts: total=4272,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.868 job1: (groupid=0, jobs=1): err= 0: pid=3338268: Tue Oct 8 18:15:14 2024 00:10:01.868 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:01.868 slat (nsec): min=8440, max=34958, avg=9110.30, stdev=1155.62 00:10:01.868 clat (usec): min=68, max=368, avg=104.51, stdev=22.08 00:10:01.868 lat (usec): min=76, max=384, avg=113.62, stdev=22.17 00:10:01.868 clat percentiles (usec): 00:10:01.868 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 82], 00:10:01.869 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 111], 60.00th=[ 117], 00:10:01.869 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 135], 00:10:01.869 | 99.00th=[ 145], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 194], 00:10:01.869 | 99.99th=[ 371] 00:10:01.869 write: IOPS=4457, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1001msec); 0 zone resets 00:10:01.869 slat (nsec): min=10688, max=41067, avg=11918.66, stdev=1249.82 00:10:01.869 clat (usec): min=64, max=972, 
avg=102.75, stdev=26.38 00:10:01.869 lat (usec): min=76, max=984, avg=114.67, stdev=26.41 00:10:01.869 clat percentiles (usec): 00:10:01.869 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 78], 00:10:01.869 | 30.00th=[ 81], 40.00th=[ 98], 50.00th=[ 108], 60.00th=[ 114], 00:10:01.869 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 139], 00:10:01.869 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 200], 99.95th=[ 204], 00:10:01.869 | 99.99th=[ 971] 00:10:01.869 bw ( KiB/s): min=16384, max=16384, per=23.12%, avg=16384.00, stdev= 0.00, samples=1 00:10:01.869 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:01.869 lat (usec) : 100=43.20%, 250=56.78%, 500=0.01%, 1000=0.01% 00:10:01.869 cpu : usr=4.70%, sys=10.20%, ctx=8558, majf=0, minf=1 00:10:01.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 issued rwts: total=4096,4462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.869 job2: (groupid=0, jobs=1): err= 0: pid=3338269: Tue Oct 8 18:15:14 2024 00:10:01.869 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:01.869 slat (nsec): min=8641, max=35033, avg=9229.51, stdev=1216.21 00:10:01.869 clat (usec): min=80, max=371, avg=120.29, stdev=13.95 00:10:01.869 lat (usec): min=89, max=380, avg=129.52, stdev=14.02 00:10:01.869 clat percentiles (usec): 00:10:01.869 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 102], 20.00th=[ 112], 00:10:01.869 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:10:01.869 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:10:01.869 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 198], 99.95th=[ 221], 00:10:01.869 | 99.99th=[ 371] 00:10:01.869 write: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1001msec); 0 zone resets 00:10:01.869 slat (nsec): min=10846, max=48351, avg=11959.61, stdev=1583.34 00:10:01.869 clat (usec): min=74, max=209, avg=115.21, stdev=14.12 00:10:01.869 lat (usec): min=86, max=221, avg=127.17, stdev=14.14 00:10:01.869 clat percentiles (usec): 00:10:01.869 | 1.00th=[ 82], 5.00th=[ 93], 10.00th=[ 100], 20.00th=[ 106], 00:10:01.869 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 118], 00:10:01.869 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 145], 00:10:01.869 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 186], 00:10:01.869 | 99.99th=[ 210] 00:10:01.869 bw ( KiB/s): min=16384, max=16384, per=23.12%, avg=16384.00, stdev= 0.00, samples=1 00:10:01.869 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:01.869 lat (usec) : 100=10.17%, 250=89.82%, 500=0.01% 00:10:01.869 cpu : usr=4.90%, sys=8.50%, ctx=7642, majf=0, minf=1 00:10:01.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 issued rwts: total=3584,4058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.869 job3: (groupid=0, jobs=1): err= 0: pid=3338270: Tue Oct 8 18:15:14 2024 00:10:01.869 read: IOPS=4495, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1001msec) 00:10:01.869 slat (nsec): min=8771, max=29646, avg=9361.24, stdev=855.13 
00:10:01.869 clat (usec): min=75, max=201, avg=99.29, stdev=16.47 00:10:01.869 lat (usec): min=85, max=211, avg=108.65, stdev=16.52 00:10:01.869 clat percentiles (usec): 00:10:01.869 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:10:01.869 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:10:01.869 | 70.00th=[ 99], 80.00th=[ 110], 90.00th=[ 123], 95.00th=[ 135], 00:10:01.869 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 186], 00:10:01.869 | 99.99th=[ 202] 00:10:01.869 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:01.869 slat (nsec): min=10953, max=42838, avg=12132.90, stdev=1269.80 00:10:01.869 clat (usec): min=71, max=167, avg=93.33, stdev=13.72 00:10:01.869 lat (usec): min=83, max=179, avg=105.46, stdev=13.75 00:10:01.869 clat percentiles (usec): 00:10:01.869 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 84], 00:10:01.869 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:10:01.869 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 118], 95.00th=[ 123], 00:10:01.869 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 157], 99.95th=[ 159], 00:10:01.869 | 99.99th=[ 167] 00:10:01.869 bw ( KiB/s): min=20480, max=20480, per=28.90%, avg=20480.00, stdev= 0.00, samples=1 00:10:01.869 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:01.869 lat (usec) : 100=75.12%, 250=24.88% 00:10:01.869 cpu : usr=5.70%, sys=10.40%, ctx=9108, majf=0, minf=1 00:10:01.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.869 issued rwts: total=4500,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.869 00:10:01.869 Run status group 0 (all jobs): 00:10:01.869 READ: bw=64.2MiB/s (67.3MB/s), 14.0MiB/s-17.6MiB/s (14.7MB/s-18.4MB/s), io=64.3MiB (67.4MB), run=1001-1001msec 00:10:01.869 WRITE: bw=69.2MiB/s (72.6MB/s), 15.8MiB/s-18.0MiB/s (16.6MB/s-18.9MB/s), io=69.3MiB (72.6MB), run=1001-1001msec 00:10:01.869 00:10:01.869 Disk stats (read/write): 00:10:01.869 nvme0n1: ios=3634/3759, merge=0/0, ticks=382/345, in_queue=727, util=85.97% 00:10:01.869 nvme0n2: ios=3584/3756, merge=0/0, ticks=356/353, in_queue=709, util=86.46% 00:10:01.869 nvme0n3: ios=3072/3340, merge=0/0, ticks=355/360, in_queue=715, util=88.90% 00:10:01.869 nvme0n4: ios=3835/4096, merge=0/0, ticks=356/345, in_queue=701, util=89.65% 00:10:01.869 18:15:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:01.869 [global] 00:10:01.869 thread=1 00:10:01.869 invalidate=1 00:10:01.869 rw=randwrite 00:10:01.869 time_based=1 00:10:01.869 runtime=1 00:10:01.869 ioengine=libaio 00:10:01.869 direct=1 00:10:01.869 bs=4096 00:10:01.869 iodepth=1 00:10:01.869 norandommap=0 00:10:01.869 numjobs=1 00:10:01.869 00:10:01.869 verify_dump=1 00:10:01.869 verify_backlog=512 00:10:01.869 verify_state_save=0 00:10:01.869 do_verify=1 00:10:01.869 verify=crc32c-intel 00:10:01.869 [job0] 00:10:01.869 filename=/dev/nvme0n1 00:10:01.869 [job1] 00:10:01.869 filename=/dev/nvme0n2 00:10:01.869 [job2] 00:10:01.869 filename=/dev/nvme0n3 00:10:01.869 [job3] 00:10:01.869 filename=/dev/nvme0n4 00:10:01.869 Could not set queue depth (nvme0n1) 00:10:01.869 Could not set queue depth (nvme0n2) 
00:10:01.869 Could not set queue depth (nvme0n3) 00:10:01.869 Could not set queue depth (nvme0n4) 00:10:02.127 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.127 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.127 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.127 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.127 fio-3.35 00:10:02.127 Starting 4 threads 00:10:03.497 00:10:03.497 job0: (groupid=0, jobs=1): err= 0: pid=3338564: Tue Oct 8 18:15:16 2024 00:10:03.497 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:03.497 slat (nsec): min=8346, max=97029, avg=8909.31, stdev=1595.08 00:10:03.497 clat (usec): min=7, max=119, avg=82.67, stdev= 7.07 00:10:03.497 lat (usec): min=75, max=127, avg=91.58, stdev= 7.12 00:10:03.497 clat percentiles (usec): 00:10:03.497 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:10:03.497 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 84], 00:10:03.497 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 93], 95.00th=[ 95], 00:10:03.497 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 112], 99.95th=[ 113], 00:10:03.497 | 99.99th=[ 120] 00:10:03.497 write: IOPS=5487, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1001msec); 0 zone resets 00:10:03.497 slat (nsec): min=10421, max=42984, avg=11463.40, stdev=1341.03 00:10:03.497 clat (usec): min=60, max=193, avg=79.91, stdev=11.54 00:10:03.497 lat (usec): min=71, max=204, avg=91.37, stdev=11.74 00:10:03.497 clat percentiles (usec): 00:10:03.497 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:10:03.497 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:10:03.497 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 90], 95.00th=[ 98], 00:10:03.497 | 99.00th=[ 129], 99.50th=[ 139], 99.90th=[ 163], 99.95th=[ 172], 00:10:03.497 | 99.99th=[ 194] 00:10:03.497 bw ( KiB/s): min=20480, max=20480, per=29.22%, avg=20480.00, stdev= 0.00, samples=1 00:10:03.497 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:03.497 lat (usec) : 10=0.01%, 100=96.92%, 250=3.07% 00:10:03.497 cpu : usr=7.00%, sys=11.30%, ctx=10613, majf=0, minf=1 00:10:03.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 issued rwts: total=5120,5493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.498 job1: (groupid=0, jobs=1): err= 0: pid=3338565: Tue Oct 8 18:15:16 2024 00:10:03.498 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:03.498 slat (nsec): min=8411, max=26917, avg=9177.95, stdev=1346.37 00:10:03.498 clat (usec): min=75, max=234, avg=128.74, stdev=12.57 00:10:03.498 lat (usec): min=84, max=243, avg=137.92, stdev=12.56 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 120], 00:10:03.498 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:10:03.498 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:10:03.498 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 200], 99.95th=[ 215], 00:10:03.498 | 99.99th=[ 235] 00:10:03.498 write: IOPS=3699, BW=14.5MiB/s 
(15.2MB/s)(14.5MiB/1001msec); 0 zone resets 00:10:03.498 slat (nsec): min=10299, max=42584, avg=11596.16, stdev=1633.71 00:10:03.498 clat (usec): min=72, max=396, avg=119.77, stdev=14.41 00:10:03.498 lat (usec): min=83, max=407, avg=131.37, stdev=14.42 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 86], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 111], 00:10:03.498 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:10:03.498 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 141], 00:10:03.498 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 253], 99.95th=[ 255], 00:10:03.498 | 99.99th=[ 396] 00:10:03.498 bw ( KiB/s): min=16120, max=16120, per=23.00%, avg=16120.00, stdev= 0.00, samples=1 00:10:03.498 iops : min= 4030, max= 4030, avg=4030.00, stdev= 0.00, samples=1 00:10:03.498 lat (usec) : 100=2.57%, 250=97.38%, 500=0.05% 00:10:03.498 cpu : usr=4.40%, sys=8.20%, ctx=7287, majf=0, minf=1 00:10:03.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 issued rwts: total=3584,3703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.498 job2: (groupid=0, jobs=1): err= 0: pid=3338571: Tue Oct 8 18:15:16 2024 00:10:03.498 read: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec) 00:10:03.498 slat (nsec): min=8581, max=30260, avg=9229.98, stdev=926.28 00:10:03.498 clat (usec): min=76, max=332, avg=101.91, stdev= 9.20 00:10:03.498 lat (usec): min=85, max=341, avg=111.14, stdev= 9.18 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:10:03.498 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:10:03.498 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 114], 95.00th=[ 117], 00:10:03.498 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 133], 99.95th=[ 153], 00:10:03.498 | 99.99th=[ 334] 00:10:03.498 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:03.498 slat (nsec): min=10592, max=38245, avg=11667.21, stdev=1161.45 00:10:03.498 clat (usec): min=75, max=280, avg=96.37, stdev= 8.80 00:10:03.498 lat (usec): min=86, max=292, avg=108.03, stdev= 8.84 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:10:03.498 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:10:03.498 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 112], 00:10:03.498 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 128], 00:10:03.498 | 99.99th=[ 281] 00:10:03.498 bw ( KiB/s): min=20344, max=20344, per=29.03%, avg=20344.00, stdev= 0.00, samples=1 00:10:03.498 iops : min= 5086, max= 5086, avg=5086.00, stdev= 0.00, samples=1 00:10:03.498 lat (usec) : 100=56.01%, 250=43.97%, 500=0.02% 00:10:03.498 cpu : usr=5.50%, sys=10.00%, ctx=8913, majf=0, minf=1 00:10:03.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 issued rwts: total=4305,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.498 job3: (groupid=0, jobs=1): err= 0: pid=3338572: Tue Oct 8 18:15:16 2024 
00:10:03.498 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:03.498 slat (nsec): min=8691, max=31582, avg=9408.65, stdev=1040.77 00:10:03.498 clat (usec): min=81, max=206, avg=128.61, stdev=11.29 00:10:03.498 lat (usec): min=91, max=216, avg=138.02, stdev=11.29 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 120], 00:10:03.498 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:10:03.498 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 149], 00:10:03.498 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 202], 00:10:03.498 | 99.99th=[ 208] 00:10:03.498 write: IOPS=3729, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets 00:10:03.498 slat (nsec): min=10476, max=38918, avg=11725.38, stdev=1256.59 00:10:03.498 clat (usec): min=75, max=391, avg=118.65, stdev=13.72 00:10:03.498 lat (usec): min=87, max=402, avg=130.37, stdev=13.70 00:10:03.498 clat percentiles (usec): 00:10:03.498 | 1.00th=[ 90], 5.00th=[ 98], 10.00th=[ 104], 20.00th=[ 110], 00:10:03.498 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:10:03.498 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 139], 00:10:03.498 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 273], 99.95th=[ 285], 00:10:03.498 | 99.99th=[ 392] 00:10:03.498 bw ( KiB/s): min=16352, max=16352, per=23.33%, avg=16352.00, stdev= 0.00, samples=1 00:10:03.498 iops : min= 4088, max= 4088, avg=4088.00, stdev= 0.00, samples=1 00:10:03.498 lat (usec) : 100=3.38%, 250=96.57%, 500=0.05% 00:10:03.498 cpu : usr=4.60%, sys=8.10%, ctx=7317, majf=0, minf=1 00:10:03.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.498 issued rwts: total=3584,3733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.498 00:10:03.498 Run status group 0 (all jobs): 00:10:03.498 READ: bw=64.8MiB/s (67.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=64.8MiB (68.0MB), run=1001-1001msec 00:10:03.498 WRITE: bw=68.4MiB/s (71.8MB/s), 14.5MiB/s-21.4MiB/s (15.2MB/s-22.5MB/s), io=68.5MiB (71.8MB), run=1001-1001msec 00:10:03.498 00:10:03.498 Disk stats (read/write): 00:10:03.498 nvme0n1: ios=4402/4608, merge=0/0, ticks=342/338, in_queue=680, util=86.37% 00:10:03.498 nvme0n2: ios=3072/3080, merge=0/0, ticks=390/354, in_queue=744, util=86.38% 00:10:03.498 nvme0n3: ios=3584/4031, merge=0/0, ticks=331/379, in_queue=710, util=88.82% 00:10:03.498 nvme0n4: ios=3072/3110, merge=0/0, ticks=383/345, in_queue=728, util=89.66% 00:10:03.498 18:15:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:03.498 [global] 00:10:03.498 thread=1 00:10:03.498 invalidate=1 00:10:03.498 rw=write 00:10:03.498 time_based=1 00:10:03.498 runtime=1 00:10:03.498 ioengine=libaio 00:10:03.498 direct=1 00:10:03.498 bs=4096 00:10:03.498 iodepth=128 00:10:03.498 norandommap=0 00:10:03.498 numjobs=1 00:10:03.498 00:10:03.498 verify_dump=1 00:10:03.498 verify_backlog=512 00:10:03.498 verify_state_save=0 00:10:03.498 do_verify=1 00:10:03.498 verify=crc32c-intel 00:10:03.498 [job0] 00:10:03.498 filename=/dev/nvme0n1 00:10:03.498 [job1] 00:10:03.498 filename=/dev/nvme0n2 00:10:03.498 [job2] 00:10:03.498 
filename=/dev/nvme0n3 00:10:03.498 [job3] 00:10:03.498 filename=/dev/nvme0n4 00:10:03.498 Could not set queue depth (nvme0n1) 00:10:03.498 Could not set queue depth (nvme0n2) 00:10:03.498 Could not set queue depth (nvme0n3) 00:10:03.498 Could not set queue depth (nvme0n4) 00:10:03.498 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.498 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.498 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.498 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.498 fio-3.35 00:10:03.498 Starting 4 threads 00:10:04.869 00:10:04.869 job0: (groupid=0, jobs=1): err= 0: pid=3338871: Tue Oct 8 18:15:17 2024 00:10:04.869 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:10:04.869 slat (usec): min=2, max=5195, avg=71.16, stdev=319.07 00:10:04.869 clat (usec): min=2955, max=17643, avg=9337.48, stdev=3404.29 00:10:04.869 lat (usec): min=2973, max=17653, avg=9408.64, stdev=3419.94 00:10:04.869 clat percentiles (usec): 00:10:04.869 | 1.00th=[ 4228], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 6587], 00:10:04.869 | 30.00th=[ 6849], 40.00th=[ 7242], 50.00th=[ 7898], 60.00th=[ 8848], 00:10:04.869 | 70.00th=[11076], 80.00th=[13566], 90.00th=[14746], 95.00th=[15401], 00:10:04.869 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:10:04.869 | 99.99th=[17695] 00:10:04.869 write: IOPS=6934, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1003msec); 0 zone resets 00:10:04.869 slat (usec): min=2, max=4637, avg=71.54, stdev=314.21 00:10:04.869 clat (usec): min=1733, max=19004, avg=9331.75, stdev=3408.73 00:10:04.869 lat (usec): min=2469, max=19884, avg=9403.29, stdev=3429.16 00:10:04.869 clat percentiles (usec): 00:10:04.869 | 1.00th=[ 4047], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6325], 00:10:04.869 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 8160], 60.00th=[10028], 00:10:04.869 | 70.00th=[11469], 80.00th=[12649], 90.00th=[14222], 95.00th=[15401], 00:10:04.869 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:10:04.869 | 99.99th=[19006] 00:10:04.869 bw ( KiB/s): min=24576, max=30048, per=25.93%, avg=27312.00, stdev=3869.29, samples=2 00:10:04.869 iops : min= 6144, max= 7512, avg=6828.00, stdev=967.32, samples=2 00:10:04.869 lat (msec) : 2=0.01%, 4=0.53%, 10=62.37%, 20=37.09% 00:10:04.869 cpu : usr=3.29%, sys=6.19%, ctx=1196, majf=0, minf=1 00:10:04.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.869 issued rwts: total=6656,6955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.869 job1: (groupid=0, jobs=1): err= 0: pid=3338872: Tue Oct 8 18:15:17 2024 00:10:04.869 read: IOPS=6306, BW=24.6MiB/s (25.8MB/s)(24.7MiB/1003msec) 00:10:04.869 slat (usec): min=2, max=7027, avg=80.22, stdev=375.33 00:10:04.869 clat (usec): min=1926, max=18285, avg=10290.65, stdev=3138.56 00:10:04.869 lat (usec): min=2956, max=19265, avg=10370.87, stdev=3149.70 00:10:04.869 clat percentiles (usec): 00:10:04.869 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7308], 00:10:04.869 | 30.00th=[ 8160], 40.00th=[ 8979], 
50.00th=[10028], 60.00th=[10945], 00:10:04.869 | 70.00th=[12125], 80.00th=[13435], 90.00th=[14484], 95.00th=[15795], 00:10:04.869 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[18220], 00:10:04.869 | 99.99th=[18220] 00:10:04.869 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:10:04.869 slat (usec): min=2, max=4414, avg=69.88, stdev=305.90 00:10:04.869 clat (usec): min=2815, max=20040, avg=9297.07, stdev=3443.95 00:10:04.870 lat (usec): min=2819, max=20186, avg=9366.95, stdev=3461.19 00:10:04.870 clat percentiles (usec): 00:10:04.870 | 1.00th=[ 3851], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6587], 00:10:04.870 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9503], 00:10:04.870 | 70.00th=[10683], 80.00th=[11863], 90.00th=[14484], 95.00th=[16712], 00:10:04.870 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:10:04.870 | 99.99th=[20055] 00:10:04.870 bw ( KiB/s): min=24576, max=28672, per=25.28%, avg=26624.00, stdev=2896.31, samples=2 00:10:04.870 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:10:04.870 lat (msec) : 2=0.01%, 4=1.24%, 10=56.91%, 20=41.82%, 50=0.02% 00:10:04.870 cpu : usr=3.19%, sys=5.69%, ctx=1138, majf=0, minf=1 00:10:04.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:04.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.870 issued rwts: total=6325,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.870 job2: (groupid=0, jobs=1): err= 0: pid=3338874: Tue Oct 8 18:15:17 2024 00:10:04.870 read: IOPS=6277, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1003msec) 00:10:04.870 slat (usec): min=2, max=5355, avg=77.67, stdev=382.55 00:10:04.870 clat (usec): min=1980, max=19557, avg=10188.50, stdev=2995.56 00:10:04.870 lat (usec): min=2418, max=19561, avg=10266.17, stdev=3010.50 00:10:04.870 clat percentiles (usec): 00:10:04.870 | 1.00th=[ 4817], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7504], 00:10:04.870 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10683], 00:10:04.870 | 70.00th=[11731], 80.00th=[12780], 90.00th=[14484], 95.00th=[15795], 00:10:04.870 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:10:04.870 | 99.99th=[19530] 00:10:04.870 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:10:04.870 slat (usec): min=2, max=5268, avg=72.42, stdev=350.58 00:10:04.870 clat (usec): min=3374, max=18837, avg=9421.10, stdev=2645.40 00:10:04.870 lat (usec): min=3384, max=18843, avg=9493.52, stdev=2662.02 00:10:04.870 clat percentiles (usec): 00:10:04.870 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7177], 00:10:04.870 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:04.870 | 70.00th=[10290], 80.00th=[11863], 90.00th=[13698], 95.00th=[14615], 00:10:04.870 | 99.00th=[15664], 99.50th=[16909], 99.90th=[17433], 99.95th=[18220], 00:10:04.870 | 99.99th=[18744] 00:10:04.870 bw ( KiB/s): min=24776, max=28472, per=25.28%, avg=26624.00, stdev=2613.47, samples=2 00:10:04.870 iops : min= 6194, max= 7118, avg=6656.00, stdev=653.37, samples=2 00:10:04.870 lat (msec) : 2=0.01%, 4=0.23%, 10=59.72%, 20=40.04% 00:10:04.870 cpu : usr=3.49%, sys=5.89%, ctx=1073, majf=0, minf=1 00:10:04.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:04.870 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.870 issued rwts: total=6296,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.870 job3: (groupid=0, jobs=1): err= 0: pid=3338875: Tue Oct 8 18:15:17 2024 00:10:04.870 read: IOPS=5753, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1003msec) 00:10:04.870 slat (usec): min=2, max=6090, avg=81.90, stdev=374.56 00:10:04.870 clat (usec): min=1875, max=23303, avg=10789.17, stdev=2975.03 00:10:04.870 lat (usec): min=3848, max=23307, avg=10871.07, stdev=2984.47 00:10:04.870 clat percentiles (usec): 00:10:04.870 | 1.00th=[ 5473], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 8455], 00:10:04.870 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10945], 00:10:04.870 | 70.00th=[11731], 80.00th=[13173], 90.00th=[15139], 95.00th=[16188], 00:10:04.870 | 99.00th=[19530], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:10:04.870 | 99.99th=[23200] 00:10:04.870 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:04.870 slat (usec): min=2, max=4397, avg=81.51, stdev=351.65 00:10:04.870 clat (usec): min=5119, max=19965, avg=10496.15, stdev=3107.19 00:10:04.870 lat (usec): min=5129, max=21409, avg=10577.66, stdev=3123.52 00:10:04.870 clat percentiles (usec): 00:10:04.870 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 7963], 00:10:04.870 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10421], 00:10:04.870 | 70.00th=[11469], 80.00th=[13435], 90.00th=[15533], 95.00th=[16909], 00:10:04.870 | 99.00th=[18482], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:10:04.870 | 99.99th=[20055] 00:10:04.870 bw ( KiB/s): min=23664, max=25488, per=23.33%, avg=24576.00, stdev=1289.76, samples=2 00:10:04.870 iops : min= 5916, max= 6372, avg=6144.00, stdev=322.44, samples=2 00:10:04.870 lat (msec) : 2=0.01%, 4=0.11%, 10=51.73%, 20=47.77%, 50=0.38% 00:10:04.870 cpu : usr=3.59%, sys=4.99%, ctx=1071, majf=0, minf=1 00:10:04.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:04.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.870 issued rwts: total=5771,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.870 00:10:04.870 Run status group 0 (all jobs): 00:10:04.870 READ: bw=97.6MiB/s (102MB/s), 22.5MiB/s-25.9MiB/s (23.6MB/s-27.2MB/s), io=97.8MiB (103MB), run=1003-1003msec 00:10:04.870 WRITE: bw=103MiB/s (108MB/s), 23.9MiB/s-27.1MiB/s (25.1MB/s-28.4MB/s), io=103MiB (108MB), run=1003-1003msec 00:10:04.870 00:10:04.870 Disk stats (read/write): 00:10:04.870 nvme0n1: ios=5697/6144, merge=0/0, ticks=14196/16506, in_queue=30702, util=83.97% 00:10:04.870 nvme0n2: ios=5632/5692, merge=0/0, ticks=17619/14368, in_queue=31987, util=84.37% 00:10:04.870 nvme0n3: ios=5009/5120, merge=0/0, ticks=18569/16739, in_queue=35308, util=88.09% 00:10:04.870 nvme0n4: ios=4608/5101, merge=0/0, ticks=15640/16427, in_queue=32067, util=88.46% 00:10:04.870 18:15:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:04.870 [global] 00:10:04.870 thread=1 00:10:04.870 invalidate=1 00:10:04.870 rw=randwrite 00:10:04.870 time_based=1 
00:10:04.870 runtime=1 00:10:04.870 ioengine=libaio 00:10:04.870 direct=1 00:10:04.870 bs=4096 00:10:04.870 iodepth=128 00:10:04.870 norandommap=0 00:10:04.870 numjobs=1 00:10:04.870 00:10:04.870 verify_dump=1 00:10:04.870 verify_backlog=512 00:10:04.870 verify_state_save=0 00:10:04.870 do_verify=1 00:10:04.870 verify=crc32c-intel 00:10:04.870 [job0] 00:10:04.870 filename=/dev/nvme0n1 00:10:04.870 [job1] 00:10:04.870 filename=/dev/nvme0n2 00:10:04.870 [job2] 00:10:04.870 filename=/dev/nvme0n3 00:10:04.870 [job3] 00:10:04.870 filename=/dev/nvme0n4 00:10:04.870 Could not set queue depth (nvme0n1) 00:10:04.870 Could not set queue depth (nvme0n2) 00:10:04.870 Could not set queue depth (nvme0n3) 00:10:04.870 Could not set queue depth (nvme0n4) 00:10:05.128 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.128 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.128 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.128 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.128 fio-3.35 00:10:05.128 Starting 4 threads 00:10:06.500 00:10:06.500 job0: (groupid=0, jobs=1): err= 0: pid=3339175: Tue Oct 8 18:15:19 2024 00:10:06.500 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:06.500 slat (usec): min=2, max=5477, avg=84.16, stdev=391.43 00:10:06.500 clat (usec): min=3499, max=20840, avg=11174.90, stdev=3848.90 00:10:06.500 lat (usec): min=3502, max=23373, avg=11259.07, stdev=3869.50 00:10:06.500 clat percentiles (usec): 00:10:06.500 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 6521], 20.00th=[ 7832], 00:10:06.500 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11863], 00:10:06.500 | 70.00th=[12911], 80.00th=[14746], 90.00th=[17171], 95.00th=[18220], 00:10:06.500 | 99.00th=[19530], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:10:06.500 | 99.99th=[20841] 00:10:06.500 write: IOPS=5740, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec); 0 zone resets 00:10:06.500 slat (usec): min=2, max=6313, avg=86.38, stdev=373.76 00:10:06.500 clat (usec): min=2270, max=23056, avg=11144.51, stdev=4765.15 00:10:06.500 lat (usec): min=2345, max=23068, avg=11230.89, stdev=4795.30 00:10:06.500 clat percentiles (usec): 00:10:06.500 | 1.00th=[ 3851], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 6521], 00:10:06.500 | 30.00th=[ 8094], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[11994], 00:10:06.500 | 70.00th=[13829], 80.00th=[15664], 90.00th=[17957], 95.00th=[20055], 00:10:06.500 | 99.00th=[22152], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:10:06.500 | 99.99th=[22938] 00:10:06.500 bw ( KiB/s): min=20480, max=24576, per=24.92%, avg=22528.00, stdev=2896.31, samples=2 00:10:06.500 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:06.500 lat (msec) : 4=0.93%, 10=43.27%, 20=52.71%, 50=3.09% 00:10:06.500 cpu : usr=3.29%, sys=5.79%, ctx=1262, majf=0, minf=1 00:10:06.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.500 issued rwts: total=5632,5758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.500 job1: (groupid=0, jobs=1): err= 0: 
pid=3339176: Tue Oct 8 18:15:19 2024 00:10:06.500 read: IOPS=4934, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:10:06.500 slat (usec): min=2, max=5382, avg=94.06, stdev=428.80 00:10:06.500 clat (usec): min=632, max=23213, avg=12444.31, stdev=3891.82 00:10:06.500 lat (usec): min=3364, max=23224, avg=12538.37, stdev=3903.33 00:10:06.500 clat percentiles (usec): 00:10:06.500 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 8979], 00:10:06.500 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12387], 60.00th=[13566], 00:10:06.500 | 70.00th=[14222], 80.00th=[15401], 90.00th=[17171], 95.00th=[20317], 00:10:06.500 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:10:06.500 | 99.99th=[23200] 00:10:06.501 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:06.501 slat (usec): min=2, max=5339, avg=99.73, stdev=424.85 00:10:06.501 clat (usec): min=2709, max=22761, avg=12721.78, stdev=4273.46 00:10:06.501 lat (usec): min=2713, max=25156, avg=12821.51, stdev=4292.48 00:10:06.501 clat percentiles (usec): 00:10:06.501 | 1.00th=[ 4293], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 8356], 00:10:06.501 | 30.00th=[10290], 40.00th=[12125], 50.00th=[12911], 60.00th=[13698], 00:10:06.501 | 70.00th=[14615], 80.00th=[16712], 90.00th=[18744], 95.00th=[19792], 00:10:06.501 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22676], 99.95th=[22676], 00:10:06.501 | 99.99th=[22676] 00:10:06.501 bw ( KiB/s): min=16384, max=24576, per=22.65%, avg=20480.00, stdev=5792.62, samples=2 00:10:06.501 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:06.501 lat (usec) : 750=0.01% 00:10:06.501 lat (msec) : 4=0.57%, 10=26.97%, 20=67.54%, 50=4.92% 00:10:06.501 cpu : usr=2.50%, sys=5.59%, ctx=1071, majf=0, minf=1 00:10:06.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:06.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.501 issued rwts: total=4944,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.501 job2: (groupid=0, jobs=1): err= 0: pid=3339183: Tue Oct 8 18:15:19 2024 00:10:06.501 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:10:06.501 slat (usec): min=2, max=5376, avg=83.26, stdev=415.68 00:10:06.501 clat (usec): min=4171, max=23281, avg=10558.58, stdev=3372.09 00:10:06.501 lat (usec): min=4195, max=23285, avg=10641.84, stdev=3385.99 00:10:06.501 clat percentiles (usec): 00:10:06.501 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7832], 00:10:06.501 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11076], 00:10:06.501 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14877], 95.00th=[16909], 00:10:06.501 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22676], 99.95th=[22676], 00:10:06.501 | 99.99th=[23200] 00:10:06.501 write: IOPS=6157, BW=24.1MiB/s (25.2MB/s)(24.1MiB/1004msec); 0 zone resets 00:10:06.501 slat (usec): min=2, max=5153, avg=74.19, stdev=323.96 00:10:06.501 clat (usec): min=2329, max=20801, avg=9983.19, stdev=3225.38 00:10:06.501 lat (usec): min=3168, max=20821, avg=10057.39, stdev=3244.63 00:10:06.501 clat percentiles (usec): 00:10:06.501 | 1.00th=[ 4490], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 7046], 00:10:06.501 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10552], 00:10:06.501 | 70.00th=[11469], 80.00th=[12518], 90.00th=[14091], 95.00th=[15795], 00:10:06.501 | 
99.00th=[19530], 99.50th=[19530], 99.90th=[20317], 99.95th=[20579], 00:10:06.501 | 99.99th=[20841] 00:10:06.501 bw ( KiB/s): min=21120, max=28032, per=27.18%, avg=24576.00, stdev=4887.52, samples=2 00:10:06.501 iops : min= 5280, max= 7008, avg=6144.00, stdev=1221.88, samples=2 00:10:06.501 lat (msec) : 4=0.13%, 10=51.59%, 20=47.22%, 50=1.06% 00:10:06.501 cpu : usr=3.19%, sys=6.58%, ctx=1306, majf=0, minf=1 00:10:06.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:06.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.501 issued rwts: total=6144,6182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.501 job3: (groupid=0, jobs=1): err= 0: pid=3339184: Tue Oct 8 18:15:19 2024 00:10:06.501 read: IOPS=5292, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1003msec) 00:10:06.501 slat (usec): min=2, max=5614, avg=88.18, stdev=422.96 00:10:06.501 clat (usec): min=1504, max=20912, avg=11678.48, stdev=2971.75 00:10:06.501 lat (usec): min=4208, max=20921, avg=11766.66, stdev=2977.45 00:10:06.501 clat percentiles (usec): 00:10:06.501 | 1.00th=[ 5014], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 8979], 00:10:06.501 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11731], 60.00th=[12649], 00:10:06.501 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15270], 95.00th=[16712], 00:10:06.501 | 99.00th=[18482], 99.50th=[18744], 99.90th=[20841], 99.95th=[20841], 00:10:06.501 | 99.99th=[20841] 00:10:06.501 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:06.501 slat (usec): min=2, max=5636, avg=89.57, stdev=435.04 00:10:06.501 clat (usec): min=3806, max=22566, avg=11509.25, stdev=3273.22 00:10:06.501 lat (usec): min=3809, max=22575, avg=11598.83, stdev=3281.92 00:10:06.501 clat percentiles (usec): 00:10:06.501 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 7242], 20.00th=[ 8848], 00:10:06.501 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11600], 60.00th=[12256], 00:10:06.501 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15270], 95.00th=[16712], 00:10:06.501 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22152], 99.95th=[22676], 00:10:06.501 | 99.99th=[22676] 00:10:06.501 bw ( KiB/s): min=20480, max=24576, per=24.92%, avg=22528.00, stdev=2896.31, samples=2 00:10:06.501 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:06.501 lat (msec) : 2=0.01%, 4=0.13%, 10=30.55%, 20=68.31%, 50=1.01% 00:10:06.501 cpu : usr=3.39%, sys=5.59%, ctx=1147, majf=0, minf=1 00:10:06.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:06.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.501 issued rwts: total=5308,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.501 00:10:06.501 Run status group 0 (all jobs): 00:10:06.501 READ: bw=85.7MiB/s (89.9MB/s), 19.3MiB/s-23.9MiB/s (20.2MB/s-25.1MB/s), io=86.0MiB (90.2MB), run=1002-1004msec 00:10:06.501 WRITE: bw=88.3MiB/s (92.6MB/s), 20.0MiB/s-24.1MiB/s (20.9MB/s-25.2MB/s), io=88.6MiB (92.9MB), run=1002-1004msec 00:10:06.501 00:10:06.501 Disk stats (read/write): 00:10:06.501 nvme0n1: ios=4335/4608, merge=0/0, ticks=14426/14864, in_queue=29290, util=83.47% 00:10:06.501 nvme0n2: ios=4096/4242, merge=0/0, ticks=14295/14877, in_queue=29172, 
util=83.96% 00:10:06.501 nvme0n3: ios=4817/5120, merge=0/0, ticks=16498/14484, in_queue=30982, util=87.77% 00:10:06.501 nvme0n4: ios=4608/4670, merge=0/0, ticks=15877/15714, in_queue=31591, util=88.36% 00:10:06.501 18:15:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:06.501 18:15:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3339362 00:10:06.501 18:15:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:06.501 18:15:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:06.501 [global] 00:10:06.501 thread=1 00:10:06.501 invalidate=1 00:10:06.501 rw=read 00:10:06.501 time_based=1 00:10:06.501 runtime=10 00:10:06.501 ioengine=libaio 00:10:06.501 direct=1 00:10:06.501 bs=4096 00:10:06.501 iodepth=1 00:10:06.501 norandommap=1 00:10:06.501 numjobs=1 00:10:06.501 00:10:06.501 [job0] 00:10:06.501 filename=/dev/nvme0n1 00:10:06.501 [job1] 00:10:06.501 filename=/dev/nvme0n2 00:10:06.501 [job2] 00:10:06.501 filename=/dev/nvme0n3 00:10:06.501 [job3] 00:10:06.501 filename=/dev/nvme0n4 00:10:06.501 Could not set queue depth (nvme0n1) 00:10:06.501 Could not set queue depth (nvme0n2) 00:10:06.501 Could not set queue depth (nvme0n3) 00:10:06.501 Could not set queue depth (nvme0n4) 00:10:06.759 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.759 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.759 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.759 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.759 fio-3.35 00:10:06.759 Starting 4 threads 00:10:10.034 18:15:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:10.034 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=76308480, buflen=4096 00:10:10.034 fio: pid=3339483, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.034 18:15:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:10.034 18:15:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.034 18:15:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:10.034 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=109322240, buflen=4096 00:10:10.034 fio: pid=3339482, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.034 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.034 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:10.291 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=36552704, buflen=4096 00:10:10.291 fio: pid=3339478, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:10:10.291 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.291 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:10.550 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.550 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:10.550 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36925440, buflen=4096 00:10:10.550 fio: pid=3339481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.550 00:10:10.550 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3339478: Tue Oct 8 18:15:23 2024 00:10:10.550 read: IOPS=7796, BW=30.5MiB/s (31.9MB/s)(98.9MiB/3246msec) 00:10:10.550 slat (usec): min=8, max=12894, avg=10.43, stdev=98.39 00:10:10.550 clat (usec): min=41, max=512, avg=111.90, stdev=33.84 00:10:10.550 lat (usec): min=59, max=13036, avg=122.33, stdev=104.31 00:10:10.550 clat percentiles (usec): 00:10:10.550 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 78], 00:10:10.550 | 30.00th=[ 85], 40.00th=[ 110], 50.00th=[ 122], 60.00th=[ 127], 00:10:10.550 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 159], 00:10:10.550 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 215], 99.95th=[ 221], 00:10:10.550 | 99.99th=[ 293] 00:10:10.550 bw ( KiB/s): min=25888, max=46919, per=30.40%, avg=31527.83, stdev=7891.72, samples=6 00:10:10.550 iops : min= 6472, max=11729, avg=7881.83, stdev=1972.64, samples=6 00:10:10.550 lat (usec) : 50=0.02%, 100=37.98%, 250=61.98%, 500=0.01%, 750=0.01% 00:10:10.550 cpu : usr=2.62%, sys=9.06%, ctx=25311, majf=0, minf=1 00:10:10.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 issued rwts: total=25309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.550 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3339481: Tue Oct 8 18:15:23 2024 00:10:10.550 read: IOPS=6857, BW=26.8MiB/s (28.1MB/s)(99.2MiB/3704msec) 00:10:10.550 slat (usec): min=6, max=245885, avg=19.43, stdev=1544.55 00:10:10.550 clat (usec): min=44, max=1122, avg=112.23, stdev=33.57 00:10:10.550 lat (usec): min=55, max=246001, avg=131.65, stdev=1544.98 00:10:10.550 clat percentiles (usec): 00:10:10.550 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 86], 00:10:10.550 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 115], 60.00th=[ 124], 00:10:10.550 | 70.00th=[ 135], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:10:10.550 | 99.00th=[ 182], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 221], 00:10:10.550 | 99.99th=[ 326] 00:10:10.550 bw ( KiB/s): min=18510, max=36872, per=27.51%, avg=28527.29, stdev=6133.96, samples=7 00:10:10.550 iops : min= 4627, max= 9218, avg=7131.71, stdev=1533.64, samples=7 00:10:10.550 lat (usec) : 50=0.26%, 100=43.24%, 250=56.48%, 500=0.02% 
00:10:10.550 lat (msec) : 2=0.01% 00:10:10.550 cpu : usr=2.89%, sys=7.35%, ctx=25402, majf=0, minf=2 00:10:10.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 issued rwts: total=25400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.550 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3339482: Tue Oct 8 18:15:23 2024 00:10:10.550 read: IOPS=8489, BW=33.2MiB/s (34.8MB/s)(104MiB/3144msec) 00:10:10.550 slat (nsec): min=8496, max=40189, avg=9176.81, stdev=1068.77 00:10:10.550 clat (usec): min=70, max=452, avg=98.08, stdev=19.36 00:10:10.550 lat (usec): min=79, max=462, avg=107.25, stdev=19.46 00:10:10.550 clat percentiles (usec): 00:10:10.550 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:10:10.550 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:10:10.550 | 70.00th=[ 97], 80.00th=[ 112], 90.00th=[ 126], 95.00th=[ 135], 00:10:10.550 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 208], 99.95th=[ 215], 00:10:10.550 | 99.99th=[ 343] 00:10:10.550 bw ( KiB/s): min=19637, max=40152, per=33.71%, avg=34958.17, stdev=7866.35, samples=6 00:10:10.550 iops : min= 4909, max=10038, avg=8739.50, stdev=1966.69, samples=6 00:10:10.550 lat (usec) : 100=74.39%, 250=25.58%, 500=0.02% 00:10:10.550 cpu : usr=2.74%, sys=9.86%, ctx=26692, majf=0, minf=2 00:10:10.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 issued rwts: total=26691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.550 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3339483: Tue Oct 8 18:15:23 2024 00:10:10.550 read: IOPS=6864, BW=26.8MiB/s (28.1MB/s)(72.8MiB/2714msec) 00:10:10.550 slat (nsec): min=3161, max=40546, avg=9402.78, stdev=1530.39 00:10:10.550 clat (usec): min=77, max=403, avg=133.42, stdev=20.43 00:10:10.550 lat (usec): min=86, max=440, avg=142.83, stdev=20.84 00:10:10.550 clat percentiles (usec): 00:10:10.550 | 1.00th=[ 88], 5.00th=[ 96], 10.00th=[ 104], 20.00th=[ 120], 00:10:10.550 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 139], 00:10:10.550 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:10:10.550 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 210], 99.95th=[ 217], 00:10:10.550 | 99.99th=[ 371] 00:10:10.550 bw ( KiB/s): min=26336, max=30352, per=27.03%, avg=28027.20, stdev=1449.25, samples=5 00:10:10.550 iops : min= 6584, max= 7588, avg=7006.80, stdev=362.31, samples=5 00:10:10.550 lat (usec) : 100=8.00%, 250=91.98%, 500=0.02% 00:10:10.550 cpu : usr=2.69%, sys=7.63%, ctx=18631, majf=0, minf=2 00:10:10.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.550 issued rwts: total=18631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.550 
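For reference, the per-job options dumped by the fio-wrapper invocation above (the -t read -r 10 pass) amount to a job file along the following lines. This is a reconstruction from the logged options only; the job-file name and the here-doc form are assumptions, not what the wrapper literally does.

# Hypothetical reconstruction of the job file implied by the logged options;
# "nvmf.fio" is an assumed name, the option values are taken verbatim from the log above.
cat > nvmf.fio << 'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf.fio   # roughly what scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 drives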
00:10:10.550 Run status group 0 (all jobs): 00:10:10.550 READ: bw=101MiB/s (106MB/s), 26.8MiB/s-33.2MiB/s (28.1MB/s-34.8MB/s), io=375MiB (393MB), run=2714-3704msec 00:10:10.550 00:10:10.550 Disk stats (read/write): 00:10:10.550 nvme0n1: ios=24770/0, merge=0/0, ticks=2630/0, in_queue=2630, util=94.92% 00:10:10.550 nvme0n2: ios=25400/0, merge=0/0, ticks=2898/0, in_queue=2898, util=88.82% 00:10:10.550 nvme0n3: ios=26691/0, merge=0/0, ticks=2586/0, in_queue=2586, util=93.88% 00:10:10.550 nvme0n4: ios=18100/0, merge=0/0, ticks=2308/0, in_queue=2308, util=96.44% 00:10:10.808 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.808 18:15:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:11.065 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.065 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:11.323 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.323 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:11.581 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:11.581 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3339362 00:10:11.581 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:11.581 18:15:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:12.512 nvmf hotplug test: fio failed as expected 00:10:12.512 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.770 18:15:25 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:12.770 rmmod nvme_rdma 00:10:12.770 rmmod nvme_fabrics 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3336967 ']' 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3336967 00:10:12.770 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3336967 ']' 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3336967 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3336967 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3336967' 00:10:12.771 killing process with pid 3336967 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3336967 00:10:12.771 18:15:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3336967 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:13.029 00:10:13.029 real 0m27.704s 00:10:13.029 user 1m41.004s 00:10:13.029 sys 0m10.876s 00:10:13.029 18:15:26 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.029 ************************************ 00:10:13.029 END TEST nvmf_fio_target 00:10:13.029 ************************************ 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.029 18:15:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.288 ************************************ 00:10:13.288 START TEST nvmf_bdevio 00:10:13.288 ************************************ 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:13.288 * Looking for test storage... 00:10:13.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.288 --rc genhtml_branch_coverage=1 00:10:13.288 --rc genhtml_function_coverage=1 00:10:13.288 --rc genhtml_legend=1 00:10:13.288 --rc geninfo_all_blocks=1 00:10:13.288 --rc geninfo_unexecuted_blocks=1 00:10:13.288 00:10:13.288 ' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.288 --rc genhtml_branch_coverage=1 00:10:13.288 --rc genhtml_function_coverage=1 00:10:13.288 --rc genhtml_legend=1 00:10:13.288 --rc geninfo_all_blocks=1 00:10:13.288 --rc geninfo_unexecuted_blocks=1 00:10:13.288 00:10:13.288 ' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.288 --rc genhtml_branch_coverage=1 00:10:13.288 --rc genhtml_function_coverage=1 00:10:13.288 --rc genhtml_legend=1 00:10:13.288 --rc geninfo_all_blocks=1 00:10:13.288 --rc geninfo_unexecuted_blocks=1 00:10:13.288 00:10:13.288 ' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.288 --rc genhtml_branch_coverage=1 00:10:13.288 --rc genhtml_function_coverage=1 00:10:13.288 --rc genhtml_legend=1 00:10:13.288 --rc geninfo_all_blocks=1 00:10:13.288 --rc geninfo_unexecuted_blocks=1 00:10:13.288 00:10:13.288 ' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:13.288 18:15:26 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.288 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.289 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.289 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.548 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:13.548 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:13.548 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.548 18:15:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:20.122 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:20.122 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:20.122 18:15:33 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:20.122 Found net devices under 0000:18:00.0: mlx_0_0 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:20.122 Found net devices under 0000:18:00.1: mlx_0_1 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # rdma_device_init 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:20.122 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
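The nvmf/common.sh trace above resolves each RDMA netdev to its IPv4 address by piping ip -o -4 addr show through awk and cut. A standalone restatement of that helper, for reference only (the real get_ip_address lives in the sourced nvmf/common.sh):

    get_ip_address() {
        # Print the first IPv4 address on the given interface,
        # e.g. 192.168.100.8 for mlx_0_0 in this run.
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0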
00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:20.123 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:20.123 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:10:20.123 altname enp24s0f0np0 00:10:20.123 altname ens785f0np0 00:10:20.123 inet 192.168.100.8/24 scope global mlx_0_0 00:10:20.123 valid_lft forever preferred_lft forever 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:20.123 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:20.123 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:10:20.123 altname enp24s0f1np1 00:10:20.123 altname ens785f1np1 00:10:20.123 inet 192.168.100.9/24 scope global mlx_0_1 00:10:20.123 valid_lft forever preferred_lft forever 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
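Both ports already carry addresses in the 192.168.100.0/24 test subnet (.8 and .9), so the [[ -z ... ]] branch above is skipped and allocate_nic_ips only reads the values back. If a port came up without an address, the helper would presumably assign one starting at NVMF_IP_LEAST_ADDR=8; a rough sketch of such an assignment with plain iproute2, using the interface names from this run rather than the harness's actual code path:

    # Sketch only: give the two mlx ports the addresses this run expects.
    ip addr add 192.168.100.8/24 dev mlx_0_0
    ip addr add 192.168.100.9/24 dev mlx_0_1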
00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:20.123 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:20.383 192.168.100.9' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:20.383 192.168.100.9' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # head -n 1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:20.383 192.168.100.9' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # tail -n +2 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # head -n 1 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma 
== rdma ']' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3343327 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3343327 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3343327 ']' 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.383 18:15:33 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.383 [2024-10-08 18:15:33.423881] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:10:20.383 [2024-10-08 18:15:33.423944] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.383 [2024-10-08 18:15:33.508030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.642 [2024-10-08 18:15:33.595886] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.642 [2024-10-08 18:15:33.595924] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.642 [2024-10-08 18:15:33.595934] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.642 [2024-10-08 18:15:33.595942] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.642 [2024-10-08 18:15:33.595948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
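At this point the kernel initiator module (nvme-rdma) is loaded and nvmfappstart launches the SPDK target with core mask 0x78 and every trace group enabled, then waitforlisten blocks until the application answers on its RPC socket. A bare-bones equivalent of that start-up, assuming the default /var/tmp/spdk.sock socket and this workspace's paths (the real waitforlisten in autotest_common.sh does considerably more bookkeeping):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to accept commands.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done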
00:10:20.642 [2024-10-08 18:15:33.597380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.642 [2024-10-08 18:15:33.597479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:20.642 [2024-10-08 18:15:33.597581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:20.642 [2024-10-08 18:15:33.597589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.208 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.208 [2024-10-08 18:15:34.365364] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d30be0/0x1d350d0) succeed. 00:10:21.208 [2024-10-08 18:15:34.375973] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d32220/0x1d76770) succeed. 
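With the target up, bdevio.sh@18 creates the RDMA transport with 1024 shared buffers and an 8192-byte in-capsule data size (-u), after which the two mlx5 IB devices are registered. Issued by hand it would look like the following; rpc_cmd is essentially a wrapper around rpc.py, whose path here is assumed from this workspace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192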
00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.464 Malloc0 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.464 [2024-10-08 18:15:34.547236] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:21.464 { 00:10:21.464 "params": { 00:10:21.464 "name": "Nvme$subsystem", 00:10:21.464 "trtype": "$TEST_TRANSPORT", 00:10:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.464 "adrfam": "ipv4", 00:10:21.464 "trsvcid": "$NVMF_PORT", 00:10:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.464 "hdgst": ${hdgst:-false}, 00:10:21.464 "ddgst": ${ddgst:-false} 00:10:21.464 }, 00:10:21.464 "method": "bdev_nvme_attach_controller" 00:10:21.464 } 00:10:21.464 EOF 00:10:21.464 )") 00:10:21.464 18:15:34 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:21.464 18:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:21.464 "params": { 00:10:21.464 "name": "Nvme1", 00:10:21.464 "trtype": "rdma", 00:10:21.464 "traddr": "192.168.100.8", 00:10:21.464 "adrfam": "ipv4", 00:10:21.464 "trsvcid": "4420", 00:10:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.464 "hdgst": false, 00:10:21.464 "ddgst": false 00:10:21.464 }, 00:10:21.464 "method": "bdev_nvme_attach_controller" 00:10:21.464 }' 00:10:21.464 [2024-10-08 18:15:34.598600] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:10:21.464 [2024-10-08 18:15:34.598663] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343525 ] 00:10:21.720 [2024-10-08 18:15:34.686802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.720 [2024-10-08 18:15:34.772952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.720 [2024-10-08 18:15:34.773054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.720 [2024-10-08 18:15:34.773054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.978 I/O targets: 00:10:21.978 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:21.978 00:10:21.978 00:10:21.978 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.978 http://cunit.sourceforge.net/ 00:10:21.978 00:10:21.978 00:10:21.978 Suite: bdevio tests on: Nvme1n1 00:10:21.978 Test: blockdev write read block ...passed 00:10:21.978 Test: blockdev write zeroes read block ...passed 00:10:21.978 Test: blockdev write zeroes read no split ...passed 00:10:21.978 Test: blockdev write zeroes read split ...passed 00:10:21.978 Test: blockdev write zeroes read split partial ...passed 00:10:21.978 Test: blockdev reset ...[2024-10-08 18:15:34.982337] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:21.978 [2024-10-08 18:15:35.005469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:21.978 [2024-10-08 18:15:35.032008] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
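Taken together, bdevio.sh@19-24 builds the whole data path: a 64 MiB malloc bdev exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420, and a bdevio process that attaches to it as controller Nvme1 using the JSON generated above and fed in on /dev/fd/62. The same setup spelled out as explicit rpc.py calls, for clarity only (the host-side attach is shown as an RPC here, whereas the test delivers it to bdevio as a JSON config):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Target side, as run through rpc_cmd above:
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Host side, equivalent to the generated bdev_nvme_attach_controller parameters:
    "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme1 -t rdma -a 192.168.100.8 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1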
00:10:21.978 passed 00:10:21.978 Test: blockdev write read 8 blocks ...passed 00:10:21.978 Test: blockdev write read size > 128k ...passed 00:10:21.978 Test: blockdev write read invalid size ...passed 00:10:21.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:21.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:21.978 Test: blockdev write read max offset ...passed 00:10:21.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:21.978 Test: blockdev writev readv 8 blocks ...passed 00:10:21.978 Test: blockdev writev readv 30 x 1block ...passed 00:10:21.978 Test: blockdev writev readv block ...passed 00:10:21.978 Test: blockdev writev readv size > 128k ...passed 00:10:21.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:21.978 Test: blockdev comparev and writev ...[2024-10-08 18:15:35.035433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.035477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.035639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.035666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.035845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.035878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.036059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.036072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.036083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.978 [2024-10-08 18:15:35.036093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:21.978 passed 00:10:21.978 Test: blockdev nvme passthru rw ...passed 00:10:21.978 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:15:35.036399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:21.978 [2024-10-08 18:15:35.036412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.036457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:21.978 [2024-10-08 18:15:35.036468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.036509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:21.978 [2024-10-08 18:15:35.036519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:21.978 [2024-10-08 18:15:35.036563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:21.978 [2024-10-08 18:15:35.036574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:21.978 passed 00:10:21.978 Test: blockdev nvme admin passthru ...passed 00:10:21.978 Test: blockdev copy ...passed 00:10:21.978 00:10:21.978 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.978 suites 1 1 n/a 0 0 00:10:21.978 tests 23 23 23 0 0 00:10:21.978 asserts 152 152 152 0 n/a 00:10:21.978 00:10:21.978 Elapsed time = 0.175 seconds 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:22.236 rmmod nvme_rdma 00:10:22.236 rmmod nvme_fabrics 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.236 18:15:35 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3343327 ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3343327 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3343327 ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3343327 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343327 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343327' 00:10:22.236 killing process with pid 3343327 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3343327 00:10:22.236 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3343327 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:22.805 00:10:22.805 real 0m9.500s 00:10:22.805 user 0m11.534s 00:10:22.805 sys 0m6.074s 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 ************************************ 00:10:22.805 END TEST nvmf_bdevio 00:10:22.805 ************************************ 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:22.805 00:10:22.805 real 4m22.485s 00:10:22.805 user 10m50.177s 00:10:22.805 sys 1m39.139s 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 ************************************ 00:10:22.805 END TEST nvmf_target_core 00:10:22.805 ************************************ 00:10:22.805 18:15:35 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:22.805 18:15:35 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.805 18:15:35 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.805 18:15:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 ************************************ 00:10:22.805 START TEST nvmf_target_extra 00:10:22.805 ************************************ 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:22.805 * Looking for test storage... 00:10:22.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.805 18:15:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.064 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.065 --rc genhtml_branch_coverage=1 00:10:23.065 --rc genhtml_function_coverage=1 00:10:23.065 --rc genhtml_legend=1 00:10:23.065 --rc geninfo_all_blocks=1 00:10:23.065 --rc geninfo_unexecuted_blocks=1 00:10:23.065 00:10:23.065 ' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.065 --rc genhtml_branch_coverage=1 00:10:23.065 --rc genhtml_function_coverage=1 00:10:23.065 --rc genhtml_legend=1 00:10:23.065 --rc geninfo_all_blocks=1 00:10:23.065 --rc geninfo_unexecuted_blocks=1 00:10:23.065 00:10:23.065 ' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.065 --rc genhtml_branch_coverage=1 00:10:23.065 --rc genhtml_function_coverage=1 00:10:23.065 --rc genhtml_legend=1 00:10:23.065 --rc geninfo_all_blocks=1 00:10:23.065 --rc geninfo_unexecuted_blocks=1 00:10:23.065 00:10:23.065 ' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.065 --rc genhtml_branch_coverage=1 00:10:23.065 --rc genhtml_function_coverage=1 00:10:23.065 --rc genhtml_legend=1 00:10:23.065 --rc geninfo_all_blocks=1 00:10:23.065 --rc geninfo_unexecuted_blocks=1 00:10:23.065 00:10:23.065 ' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.065 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.065 ************************************ 00:10:23.065 START TEST nvmf_example 00:10:23.065 ************************************ 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:23.065 * Looking for test storage... 
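The "line 33: [: : integer expression expected" message above is bash reporting that a numeric test ran with an empty operand ('[' '' -eq 1 ']'), i.e. whatever flag variable nvmf/common.sh line 33 reads was unset for this run; the test simply takes the false branch and the script carries on. A defensive form that avoids the warning, with SOME_FLAG as a placeholder rather than the real variable name:

    # SOME_FLAG stands in for the unset variable; default it to 0 before the numeric test.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi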
00:10:23.065 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.065 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.325 --rc genhtml_branch_coverage=1 00:10:23.325 --rc genhtml_function_coverage=1 00:10:23.325 --rc genhtml_legend=1 00:10:23.325 --rc geninfo_all_blocks=1 00:10:23.325 --rc geninfo_unexecuted_blocks=1 00:10:23.325 00:10:23.325 ' 00:10:23.325 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.325 --rc genhtml_branch_coverage=1 00:10:23.326 --rc genhtml_function_coverage=1 00:10:23.326 --rc genhtml_legend=1 00:10:23.326 --rc geninfo_all_blocks=1 00:10:23.326 --rc geninfo_unexecuted_blocks=1 00:10:23.326 00:10:23.326 ' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.326 --rc genhtml_branch_coverage=1 00:10:23.326 --rc genhtml_function_coverage=1 00:10:23.326 --rc genhtml_legend=1 00:10:23.326 --rc geninfo_all_blocks=1 00:10:23.326 --rc geninfo_unexecuted_blocks=1 00:10:23.326 00:10:23.326 ' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.326 --rc genhtml_branch_coverage=1 00:10:23.326 --rc genhtml_function_coverage=1 00:10:23.326 --rc genhtml_legend=1 00:10:23.326 --rc geninfo_all_blocks=1 00:10:23.326 --rc geninfo_unexecuted_blocks=1 00:10:23.326 00:10:23.326 ' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
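The cmp_versions trace above compares the detected lcov version (1.15 in this run) against 2 and, since it is older, keeps the pre-2.0 --rc lcov_branch_coverage/--rc lcov_function_coverage options. A compact restatement of that field-wise comparison, for reference only (function name mine; the logic mirrors the traced scripts/common.sh helper):

    version_lt() {
        # Split on . - : and compare field by field, treating missing fields as 0.
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: keep the legacy --rc coverage flags"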
00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.326 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.326 18:15:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.900 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:29.901 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
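Device selection in this block is driven purely by PCI vendor:device IDs (0x15b3:0x1015 for the Mellanox mlx5 adapters found on this host); each matching PCI address is then resolved to its kernel net device through sysfs, the same /sys/bus/pci/devices/<pci>/net/* glob the trace uses a few lines further on. A standalone sketch of that resolution step, assuming the PCI address reported in the log:

  pci=0000:18:00.0
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdir" ] || continue
      echo "${netdir##*/}"        # prints the backing interface name, e.g. mlx_0_0
  done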
00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:29.901 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:29.901 Found net devices under 0000:18:00.0: mlx_0_0 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:29.901 Found net devices under 0000:18:00.1: mlx_0_1 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.901 18:15:43 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # rdma_device_init 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:29.901 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:30.161 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.161 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:10:30.161 altname enp24s0f0np0 00:10:30.161 altname ens785f0np0 00:10:30.161 inet 192.168.100.8/24 scope global mlx_0_0 00:10:30.161 valid_lft forever preferred_lft forever 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:30.161 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.161 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:10:30.161 altname enp24s0f1np1 00:10:30.161 altname ens785f1np1 00:10:30.161 inet 192.168.100.9/24 scope global mlx_0_1 00:10:30.161 valid_lft forever preferred_lft forever 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # 
get_available_rdma_ips 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.161 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:30.162 18:15:43 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:30.162 192.168.100.9' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:30.162 192.168.100.9' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # head -n 1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:30.162 192.168.100.9' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # tail -n +2 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # head -n 1 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3346688 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3346688 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3346688 ']' 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
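Both target addresses are recovered from the interfaces configured above: get_ip_address strips the prefix length from the "ip -o -4 addr show" output, and the first and second entries of the resulting list become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP through the head/tail pipeline visible in the trace. A condensed sketch of the same pipeline, using the interface names and addresses from this run:

  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9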
00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.162 18:15:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.097 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:31.356 18:15:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:43.634 Initializing NVMe Controllers 00:10:43.634 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.634 Initialization complete. Launching workers. 00:10:43.634 ======================================================== 00:10:43.634 Latency(us) 00:10:43.634 Device Information : IOPS MiB/s Average min max 00:10:43.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25051.30 97.86 2554.98 637.75 12108.05 00:10:43.634 ======================================================== 00:10:43.634 Total : 25051.30 97.86 2554.98 637.75 12108.05 00:10:43.634 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:43.634 rmmod nvme_rdma 00:10:43.634 rmmod nvme_fabrics 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3346688 ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 3346688 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3346688 ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3346688 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3346688 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:43.634 18:15:55 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3346688' 00:10:43.634 killing process with pid 3346688 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3346688 00:10:43.634 18:15:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3346688 00:10:43.634 nvmf threads initialize successfully 00:10:43.634 bdev subsystem init successfully 00:10:43.634 created a nvmf target service 00:10:43.634 create targets's poll groups done 00:10:43.634 all subsystems of target started 00:10:43.634 nvmf target is running 00:10:43.634 all subsystems of target stopped 00:10:43.634 destroy targets's poll groups done 00:10:43.634 destroyed the nvmf target service 00:10:43.634 bdev subsystem finish successfully 00:10:43.634 nvmf threads destroy successfully 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 00:10:43.634 real 0m20.055s 00:10:43.634 user 0m52.547s 00:10:43.634 sys 0m5.841s 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 END TEST nvmf_example 00:10:43.634 ************************************ 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 START TEST nvmf_filesystem 00:10:43.634 ************************************ 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:43.634 * Looking for test storage... 
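Teardown of the example test above follows the harness's usual trap pattern: a cleanup handler is armed on SIGINT/SIGTERM/EXIT for as long as the target runs, then disarmed with "trap -" once the test body has passed, and the same cleanup is invoked explicitly so it runs exactly once. A minimal sketch of that pattern with hypothetical names (some_target_app, run_test_body stand in for the nvmf example target and the spdk_nvme_perf run shown earlier):

  cleanup() { [ -n "$app_pid" ] && kill "$app_pid" 2>/dev/null; }
  ./some_target_app & app_pid=$!
  trap 'cleanup; exit 1' SIGINT SIGTERM EXIT   # armed while the test runs
  run_test_body
  trap - SIGINT SIGTERM EXIT                   # disarm on success
  cleanup                                      # then clean up explicitly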
00:10:43.634 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.634 --rc genhtml_branch_coverage=1 00:10:43.634 --rc genhtml_function_coverage=1 00:10:43.634 --rc genhtml_legend=1 00:10:43.634 --rc geninfo_all_blocks=1 00:10:43.634 --rc geninfo_unexecuted_blocks=1 00:10:43.634 00:10:43.634 ' 00:10:43.634 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.634 --rc genhtml_branch_coverage=1 00:10:43.634 --rc genhtml_function_coverage=1 00:10:43.635 --rc genhtml_legend=1 00:10:43.635 --rc geninfo_all_blocks=1 00:10:43.635 --rc geninfo_unexecuted_blocks=1 00:10:43.635 00:10:43.635 ' 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.635 --rc genhtml_branch_coverage=1 00:10:43.635 --rc genhtml_function_coverage=1 00:10:43.635 --rc genhtml_legend=1 00:10:43.635 --rc geninfo_all_blocks=1 00:10:43.635 --rc geninfo_unexecuted_blocks=1 00:10:43.635 00:10:43.635 ' 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.635 --rc genhtml_branch_coverage=1 00:10:43.635 --rc genhtml_function_coverage=1 00:10:43.635 --rc genhtml_legend=1 00:10:43.635 --rc geninfo_all_blocks=1 00:10:43.635 --rc geninfo_unexecuted_blocks=1 00:10:43.635 00:10:43.635 ' 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:43.635 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:43.635 
18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:43.635 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:43.635 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:43.636 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:43.636 #define SPDK_CONFIG_H 00:10:43.636 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:43.636 #define SPDK_CONFIG_APPS 1 00:10:43.636 #define SPDK_CONFIG_ARCH native 00:10:43.636 #undef SPDK_CONFIG_ASAN 00:10:43.636 #undef SPDK_CONFIG_AVAHI 00:10:43.636 #undef SPDK_CONFIG_CET 00:10:43.636 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:43.636 #define SPDK_CONFIG_COVERAGE 1 00:10:43.636 #define SPDK_CONFIG_CROSS_PREFIX 00:10:43.636 #undef SPDK_CONFIG_CRYPTO 00:10:43.636 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:43.636 #undef SPDK_CONFIG_CUSTOMOCF 00:10:43.636 #undef SPDK_CONFIG_DAOS 00:10:43.636 #define SPDK_CONFIG_DAOS_DIR 00:10:43.636 #define SPDK_CONFIG_DEBUG 1 00:10:43.636 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:43.636 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:43.636 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:43.636 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:43.636 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:43.636 #undef SPDK_CONFIG_DPDK_UADK 00:10:43.636 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:43.636 #define SPDK_CONFIG_EXAMPLES 1 00:10:43.636 #undef SPDK_CONFIG_FC 00:10:43.636 #define SPDK_CONFIG_FC_PATH 00:10:43.636 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:43.636 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:43.636 #define SPDK_CONFIG_FSDEV 1 00:10:43.636 #undef SPDK_CONFIG_FUSE 00:10:43.636 #undef SPDK_CONFIG_FUZZER 00:10:43.636 #define SPDK_CONFIG_FUZZER_LIB 00:10:43.636 #undef SPDK_CONFIG_GOLANG 00:10:43.636 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:43.636 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:43.636 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:43.636 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:43.636 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:43.636 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:43.636 #undef SPDK_CONFIG_HAVE_LZ4 00:10:43.636 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:43.636 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:43.636 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:43.636 #define SPDK_CONFIG_IDXD 1 00:10:43.636 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:43.636 #undef SPDK_CONFIG_IPSEC_MB 00:10:43.636 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:43.636 #define SPDK_CONFIG_ISAL 1 00:10:43.636 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:43.636 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:43.636 #define SPDK_CONFIG_LIBDIR 00:10:43.636 #undef SPDK_CONFIG_LTO 00:10:43.636 #define SPDK_CONFIG_MAX_LCORES 128 00:10:43.636 #define SPDK_CONFIG_NVME_CUSE 1 00:10:43.636 #undef SPDK_CONFIG_OCF 00:10:43.636 #define SPDK_CONFIG_OCF_PATH 00:10:43.636 #define SPDK_CONFIG_OPENSSL_PATH 00:10:43.636 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:43.636 #define SPDK_CONFIG_PGO_DIR 00:10:43.636 #undef SPDK_CONFIG_PGO_USE 00:10:43.636 #define SPDK_CONFIG_PREFIX /usr/local 00:10:43.636 #undef SPDK_CONFIG_RAID5F 00:10:43.636 #undef SPDK_CONFIG_RBD 00:10:43.636 #define SPDK_CONFIG_RDMA 1 00:10:43.636 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:43.636 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:43.636 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:43.636 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:43.636 #define SPDK_CONFIG_SHARED 1 00:10:43.636 #undef SPDK_CONFIG_SMA 00:10:43.636 #define SPDK_CONFIG_TESTS 1 00:10:43.636 #undef SPDK_CONFIG_TSAN 00:10:43.636 #define SPDK_CONFIG_UBLK 1 00:10:43.636 #define SPDK_CONFIG_UBSAN 1 00:10:43.636 #undef SPDK_CONFIG_UNIT_TESTS 00:10:43.636 #undef SPDK_CONFIG_URING 
00:10:43.636 #define SPDK_CONFIG_URING_PATH 00:10:43.636 #undef SPDK_CONFIG_URING_ZNS 00:10:43.636 #undef SPDK_CONFIG_USDT 00:10:43.636 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:43.636 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:43.636 #undef SPDK_CONFIG_VFIO_USER 00:10:43.636 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:43.636 #define SPDK_CONFIG_VHOST 1 00:10:43.636 #define SPDK_CONFIG_VIRTIO 1 00:10:43.636 #undef SPDK_CONFIG_VTUNE 00:10:43.636 #define SPDK_CONFIG_VTUNE_DIR 00:10:43.636 #define SPDK_CONFIG_WERROR 1 00:10:43.636 #define SPDK_CONFIG_WPDK_DIR 00:10:43.636 #undef SPDK_CONFIG_XNVME 00:10:43.636 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:43.636 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:43.636 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.637 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3348427 ]] 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3348427 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:43.638 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.GPJoqL 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GPJoqL/tests/target /tmp/spdk.GPJoqL 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=55658262528 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61734432768 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6076170240 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/sda1 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=xfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=221821267968 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=239938535424 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=18117267456 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30853754880 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30867214336 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=13459456 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12324048896 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346888192 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22839296 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.639 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30866550784 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30867218432 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=667648 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173429760 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173442048 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:43.639 * Looking for test storage... 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=55658262528 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8290762752 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:43.639 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:43.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.640 --rc genhtml_branch_coverage=1 00:10:43.640 --rc genhtml_function_coverage=1 00:10:43.640 --rc genhtml_legend=1 00:10:43.640 --rc geninfo_all_blocks=1 00:10:43.640 --rc geninfo_unexecuted_blocks=1 00:10:43.640 00:10:43.640 ' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:43.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.640 --rc genhtml_branch_coverage=1 00:10:43.640 --rc genhtml_function_coverage=1 00:10:43.640 --rc genhtml_legend=1 00:10:43.640 --rc geninfo_all_blocks=1 00:10:43.640 --rc geninfo_unexecuted_blocks=1 00:10:43.640 00:10:43.640 ' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:43.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.640 --rc genhtml_branch_coverage=1 00:10:43.640 --rc genhtml_function_coverage=1 00:10:43.640 --rc genhtml_legend=1 00:10:43.640 --rc geninfo_all_blocks=1 00:10:43.640 --rc geninfo_unexecuted_blocks=1 00:10:43.640 00:10:43.640 ' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:43.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.640 --rc genhtml_branch_coverage=1 00:10:43.640 --rc genhtml_function_coverage=1 00:10:43.640 --rc genhtml_legend=1 00:10:43.640 --rc geninfo_all_blocks=1 00:10:43.640 --rc geninfo_unexecuted_blocks=1 00:10:43.640 00:10:43.640 ' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.640 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.640 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.640 18:15:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.640 18:15:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.212 18:16:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.212 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.213 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:50.213 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:50.213 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 
(0x15b3 - 0x1015)' 00:10:50.474 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:50.474 Found net devices under 0000:18:00.0: mlx_0_0 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:50.474 Found net devices under 0000:18:00.1: mlx_0_1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # rdma_device_init 
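The device scan above matches PCI vendor:device pairs (0x15b3:0x1015 for the two mlx5 ports found on this node) and then maps each PCI function to its kernel netdev through sysfs. A rough standalone equivalent, assuming pciutils is installed and using the PCI addresses reported in this run:

    # list PCI functions whose vendor:device is 15b3:1015 (the mlx5 ports found above)
    lspci -Dnn | awk '/\[15b3:1015\]/ {print $1}'
    # map one of them to its netdev name, e.g. mlx_0_0
    ls /sys/bus/pci/devices/0000:18:00.0/net/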
00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:50.474 18:16:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:50.474 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:50.474 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:10:50.474 altname enp24s0f0np0 00:10:50.474 altname ens785f0np0 00:10:50.474 inet 192.168.100.8/24 scope global mlx_0_0 00:10:50.474 valid_lft forever preferred_lft forever 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:50.474 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:50.474 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:50.475 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:10:50.475 altname enp24s0f1np1 00:10:50.475 altname ens785f1np1 00:10:50.475 inet 192.168.100.9/24 scope global mlx_0_1 00:10:50.475 valid_lft forever preferred_lft forever 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:50.475 192.168.100.9' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:50.475 192.168.100.9' 
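allocate_nic_ips then walks the RDMA-capable netdevs and reads back their IPv4 addresses; 192.168.100.8 and 192.168.100.9 were already configured on this host, so the assignment branch is skipped and only the read-back runs. That read-back reduces to the ip/awk/cut pipeline shown in the trace; a minimal sketch, and the same values are split into first/second target IPs with head/tail in the lines that follow:

    for dev in mlx_0_0 mlx_0_1; do
        # first IPv4 address on the interface, without the /24 suffix
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done
    # prints 192.168.100.8 and 192.168.100.9 on this node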
00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # head -n 1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:50.475 192.168.100.9' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # tail -n +2 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # head -n 1 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:50.475 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.740 ************************************ 00:10:50.740 START TEST nvmf_filesystem_no_in_capsule 00:10:50.740 ************************************ 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3351437 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3351437 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3351437 ']' 00:10:50.740 18:16:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.740 18:16:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.740 [2024-10-08 18:16:03.751090] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:10:50.740 [2024-10-08 18:16:03.751147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.740 [2024-10-08 18:16:03.837274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.000 [2024-10-08 18:16:03.923636] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.000 [2024-10-08 18:16:03.923679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.000 [2024-10-08 18:16:03.923688] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.000 [2024-10-08 18:16:03.923697] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.000 [2024-10-08 18:16:03.923704] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
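nvmfappstart boils down to launching the target with the core mask from the test and blocking until its RPC socket answers. A condensed sketch of that startup, assuming the build-tree layout used in this job; the exact polling done by waitforlisten may differ, spdk_get_version is used here only as a cheap liveness probe:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the RPC server on /var/tmp/spdk.sock responds
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done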
00:10:51.000 [2024-10-08 18:16:03.925097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.000 [2024-10-08 18:16:03.925205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.000 [2024-10-08 18:16:03.925116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.000 [2024-10-08 18:16:03.925207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.570 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.570 [2024-10-08 18:16:04.649275] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:51.570 [2024-10-08 18:16:04.671248] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8592e0/0x85d7d0) succeed. 00:10:51.570 [2024-10-08 18:16:04.681810] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x85a920/0x89ee70) succeed. 
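Outside the harness the same setup can be driven directly over the RPC socket with scripts/rpc.py from the SPDK checkout; rpc_cmd in the trace is a thin wrapper around it. A minimal sketch matching the transport options above, followed by the backing bdev, subsystem, namespace and listener that the next trace lines add:

    # RDMA transport, 1024 shared buffers, 8 KiB IO unit, no in-capsule data
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420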
00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 [2024-10-08 18:16:04.942960] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:51.830 18:16:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.830 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:51.830 { 00:10:51.830 "name": "Malloc1", 00:10:51.830 "aliases": [ 00:10:51.830 "b22382a7-bff0-4cc8-a0e1-b10815a8d42c" 00:10:51.830 ], 00:10:51.830 "product_name": "Malloc disk", 00:10:51.831 "block_size": 512, 00:10:51.831 "num_blocks": 1048576, 00:10:51.831 "uuid": "b22382a7-bff0-4cc8-a0e1-b10815a8d42c", 00:10:51.831 "assigned_rate_limits": { 00:10:51.831 "rw_ios_per_sec": 0, 00:10:51.831 "rw_mbytes_per_sec": 0, 00:10:51.831 "r_mbytes_per_sec": 0, 00:10:51.831 "w_mbytes_per_sec": 0 00:10:51.831 }, 00:10:51.831 "claimed": true, 00:10:51.831 "claim_type": "exclusive_write", 00:10:51.831 "zoned": false, 00:10:51.831 "supported_io_types": { 00:10:51.831 "read": true, 00:10:51.831 "write": true, 00:10:51.831 "unmap": true, 00:10:51.831 "flush": true, 00:10:51.831 "reset": true, 00:10:51.831 "nvme_admin": false, 00:10:51.831 "nvme_io": false, 00:10:51.831 "nvme_io_md": false, 00:10:51.831 "write_zeroes": true, 00:10:51.831 "zcopy": true, 00:10:51.831 "get_zone_info": false, 00:10:51.831 "zone_management": false, 00:10:51.831 "zone_append": false, 00:10:51.831 "compare": false, 00:10:51.831 "compare_and_write": false, 00:10:51.831 "abort": true, 00:10:51.831 "seek_hole": false, 00:10:51.831 "seek_data": false, 00:10:51.831 "copy": true, 00:10:51.831 "nvme_iov_md": false 00:10:51.831 }, 00:10:51.831 "memory_domains": [ 00:10:51.831 { 00:10:51.831 "dma_device_id": "system", 00:10:51.831 "dma_device_type": 1 00:10:51.831 }, 00:10:51.831 { 00:10:51.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.831 "dma_device_type": 2 00:10:51.831 } 00:10:51.831 ], 00:10:51.831 "driver_specific": {} 00:10:51.831 } 00:10:51.831 ]' 00:10:51.831 18:16:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:52.091 18:16:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:53.032 18:16:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.032 18:16:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:53.032 18:16:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.032 18:16:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.032 18:16:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.941 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:55.200 18:16:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.582 ************************************ 00:10:56.582 START TEST filesystem_ext4 00:10:56.582 ************************************ 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:56.582 mke2fs 1.47.0 (5-Feb-2023) 00:10:56.582 Discarding device blocks: 0/522240 done 00:10:56.582 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.582 Filesystem UUID: 54f6565f-c487-427e-89dc-004c3b52e7e4 00:10:56.582 Superblock backups stored on 
blocks: 00:10:56.582 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.582 00:10:56.582 Allocating group tables: 0/64 done 00:10:56.582 Writing inode tables: 0/64 done 00:10:56.582 Creating journal (8192 blocks): done 00:10:56.582 Writing superblocks and filesystem accounting information: 0/64 done 00:10:56.582 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3351437 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.582 00:10:56.582 real 0m0.235s 00:10:56.582 user 0m0.038s 00:10:56.582 sys 0m0.106s 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:56.582 ************************************ 00:10:56.582 END TEST filesystem_ext4 00:10:56.582 ************************************ 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.582 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:56.582 ************************************ 00:10:56.583 START TEST filesystem_btrfs 00:10:56.583 ************************************ 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:56.583 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:56.844 btrfs-progs v6.8.1 00:10:56.844 See https://btrfs.readthedocs.io for more information. 00:10:56.844 00:10:56.844 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:56.844 NOTE: several default settings have changed in version 5.15, please make sure 00:10:56.844 this does not affect your deployments: 00:10:56.844 - DUP for metadata (-m dup) 00:10:56.844 - enabled no-holes (-O no-holes) 00:10:56.844 - enabled free-space-tree (-R free-space-tree) 00:10:56.844 00:10:56.844 Label: (null) 00:10:56.844 UUID: d23450c4-fdb3-4eb3-bc22-7594fce09751 00:10:56.844 Node size: 16384 00:10:56.844 Sector size: 4096 (CPU page size: 4096) 00:10:56.844 Filesystem size: 510.00MiB 00:10:56.844 Block group profiles: 00:10:56.844 Data: single 8.00MiB 00:10:56.844 Metadata: DUP 32.00MiB 00:10:56.844 System: DUP 8.00MiB 00:10:56.844 SSD detected: yes 00:10:56.844 Zoned device: no 00:10:56.844 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:56.844 Checksum: crc32c 00:10:56.844 Number of devices: 1 00:10:56.844 Devices: 00:10:56.844 ID SIZE PATH 00:10:56.844 1 510.00MiB /dev/nvme0n1p1 00:10:56.844 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.844 18:16:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:56.844 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:56.844 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3351437 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.104 00:10:57.104 real 0m0.362s 00:10:57.104 user 0m0.020s 00:10:57.104 sys 0m0.235s 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 END TEST filesystem_btrfs 
00:10:57.104 ************************************ 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 START TEST filesystem_xfs 00:10:57.104 ************************************ 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:57.104 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:57.105 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:57.105 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:57.105 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:57.105 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:57.105 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:57.105 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:57.105 = sectsz=512 attr=2, projid32bit=1 00:10:57.105 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:57.105 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:57.105 data = bsize=4096 blocks=130560, imaxpct=25 00:10:57.105 = sunit=0 swidth=0 blks 00:10:57.105 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:57.105 log =internal log bsize=4096 blocks=16384, version=2 00:10:57.105 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:57.105 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:57.365 Discarding blocks...Done. 
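Each filesystem variant (ext4 and btrfs above, xfs in the lines that follow) exercises the same host-side pattern: connect to the subsystem, wait for the namespace to appear, partition it once, then format, mount, write, unmount, and confirm the target is still alive. Stripped of the harness wrappers, that pattern is roughly, with the device name, NQNs and mount point from this run:

    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 \
        --hostid=0049fda6-1adc-e711-906e-0017a4403562
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
    mkfs.xfs -f /dev/nvme0n1p1            # or mkfs.ext4 -F / mkfs.btrfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                    # nvmf_tgt must still be running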
00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.365 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3351437 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.366 00:10:57.366 real 0m0.269s 00:10:57.366 user 0m0.038s 00:10:57.366 sys 0m0.110s 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:57.366 ************************************ 00:10:57.366 END TEST filesystem_xfs 00:10:57.366 ************************************ 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:57.366 18:16:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.316 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.316 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:58.316 18:16:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:58.316 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.316 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:58.316 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3351437 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3351437 ']' 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3351437 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3351437 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3351437' 00:10:58.576 killing process with pid 3351437 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3351437 00:10:58.576 18:16:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3351437 00:10:59.146 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:59.146 00:10:59.146 real 0m8.331s 00:10:59.146 user 0m32.387s 00:10:59.146 sys 0m1.479s 00:10:59.146 18:16:12 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.146 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.146 ************************************ 00:10:59.146 END TEST nvmf_filesystem_no_in_capsule 00:10:59.146 ************************************ 00:10:59.146 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:59.146 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.146 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.147 ************************************ 00:10:59.147 START TEST nvmf_filesystem_in_capsule 00:10:59.147 ************************************ 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3352771 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3352771 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3352771 ']' 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
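The target start captured here reduces to launching nvmf_tgt with a shared-memory id, a tracepoint-group mask and a core mask, then waiting for the RPC socket to come up. A simplified sketch using the same binary path and flags shown in the trace (the socket poll below is an illustrative stand-in for the waitforlisten helper, not its actual implementation):

    # flags as captured above: -i 0 (shm id), -e 0xFFFF (all tracepoint groups), -m 0xF (reactors on cores 0-3)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the target has created its default RPC socket
    until [ -S /var/tmp/spdk.sock ]; do
        sleep 0.5
    done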
00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.147 18:16:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.147 [2024-10-08 18:16:12.168503] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:10:59.147 [2024-10-08 18:16:12.168566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.147 [2024-10-08 18:16:12.254797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.406 [2024-10-08 18:16:12.343640] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.406 [2024-10-08 18:16:12.343687] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.406 [2024-10-08 18:16:12.343696] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.406 [2024-10-08 18:16:12.343704] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.407 [2024-10-08 18:16:12.343711] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.407 [2024-10-08 18:16:12.345145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.407 [2024-10-08 18:16:12.345248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.407 [2024-10-08 18:16:12.345349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.407 [2024-10-08 18:16:12.345350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.976 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.976 [2024-10-08 18:16:13.093214] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x239f2e0/0x23a37d0) 
succeed. 00:10:59.976 [2024-10-08 18:16:13.103662] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23a0920/0x23e4e70) succeed. 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.237 Malloc1 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.237 [2024-10-08 18:16:13.399253] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 
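Taken together, the RPC calls traced in this run build the RDMA target end to end: a transport with 4096-byte in-capsule data, a 512 MiB malloc bdev, a subsystem with the test serial, the namespace attach, and an RDMA listener on 192.168.100.8:4420. A condensed sketch of that sequence with the same arguments (calling rpc.py directly is an assumption here; the test script drives these through its rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096    # 4096-byte in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1                                      # 512 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The only difference from the no_in_capsule run earlier in the log is the -c 4096 argument to nvmf_create_transport, which is exactly what the filesystem_in_capsule variants exercise.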
00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:00.237 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:00.496 { 00:11:00.496 "name": "Malloc1", 00:11:00.496 "aliases": [ 00:11:00.496 "72cf3e55-8fb8-4d94-a8a7-5afcc46424a8" 00:11:00.496 ], 00:11:00.496 "product_name": "Malloc disk", 00:11:00.496 "block_size": 512, 00:11:00.496 "num_blocks": 1048576, 00:11:00.496 "uuid": "72cf3e55-8fb8-4d94-a8a7-5afcc46424a8", 00:11:00.496 "assigned_rate_limits": { 00:11:00.496 "rw_ios_per_sec": 0, 00:11:00.496 "rw_mbytes_per_sec": 0, 00:11:00.496 "r_mbytes_per_sec": 0, 00:11:00.496 "w_mbytes_per_sec": 0 00:11:00.496 }, 00:11:00.496 "claimed": true, 00:11:00.496 "claim_type": "exclusive_write", 00:11:00.496 "zoned": false, 00:11:00.496 "supported_io_types": { 00:11:00.496 "read": true, 00:11:00.496 "write": true, 00:11:00.496 "unmap": true, 00:11:00.496 "flush": true, 00:11:00.496 "reset": true, 00:11:00.496 "nvme_admin": false, 00:11:00.496 "nvme_io": false, 00:11:00.496 "nvme_io_md": false, 00:11:00.496 "write_zeroes": true, 00:11:00.496 "zcopy": true, 00:11:00.496 "get_zone_info": false, 00:11:00.496 "zone_management": false, 00:11:00.496 "zone_append": false, 00:11:00.496 "compare": false, 00:11:00.496 "compare_and_write": false, 00:11:00.496 "abort": true, 00:11:00.496 "seek_hole": false, 00:11:00.496 "seek_data": false, 00:11:00.496 "copy": true, 00:11:00.496 "nvme_iov_md": false 00:11:00.496 }, 00:11:00.496 "memory_domains": [ 00:11:00.496 { 00:11:00.496 "dma_device_id": "system", 00:11:00.496 "dma_device_type": 1 00:11:00.496 }, 00:11:00.496 { 00:11:00.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.496 "dma_device_type": 2 00:11:00.496 } 00:11:00.496 ], 00:11:00.496 "driver_specific": {} 00:11:00.496 } 00:11:00.496 ]' 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
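The malloc_size value above follows directly from the bdev_get_bdevs JSON: block_size times num_blocks gives 512 * 1048576 = 536870912 bytes (512 MiB), which is then compared against the size reported for the connected nvme device. A small sketch of that computation using the same jq filters seen in the trace (again assuming rpc.py is invoked directly rather than through the rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    bdev_info=$($rpc bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
    malloc_size=$(( bs * nb ))                     # 512 * 1048576 = 536870912 bytes (512 MiB)
    echo "$malloc_size"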
00:11:00.496 18:16:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:01.436 18:16:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.436 18:16:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:01.436 18:16:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.436 18:16:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:01.436 18:16:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:03.977 18:16:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:03.977 18:16:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.918 ************************************ 00:11:04.918 START TEST filesystem_in_capsule_ext4 00:11:04.918 ************************************ 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:04.918 mke2fs 1.47.0 (5-Feb-2023) 00:11:04.918 Discarding device blocks: 0/522240 done 00:11:04.918 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:04.918 Filesystem UUID: 1d18a935-df01-4780-91e8-481bfde69aed 00:11:04.918 
Superblock backups stored on blocks: 00:11:04.918 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:04.918 00:11:04.918 Allocating group tables: 0/64 done 00:11:04.918 Writing inode tables: 0/64 done 00:11:04.918 Creating journal (8192 blocks): done 00:11:04.918 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.918 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:04.918 18:16:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3352771 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.918 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.178 00:11:05.178 real 0m0.238s 00:11:05.178 user 0m0.030s 00:11:05.178 sys 0m0.112s 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:05.178 ************************************ 00:11:05.178 END TEST filesystem_in_capsule_ext4 00:11:05.178 ************************************ 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.178 18:16:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.178 ************************************ 00:11:05.178 START TEST filesystem_in_capsule_btrfs 00:11:05.178 ************************************ 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:05.178 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:05.179 btrfs-progs v6.8.1 00:11:05.179 See https://btrfs.readthedocs.io for more information. 00:11:05.179 00:11:05.179 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:05.179 NOTE: several default settings have changed in version 5.15, please make sure 00:11:05.179 this does not affect your deployments: 00:11:05.179 - DUP for metadata (-m dup) 00:11:05.179 - enabled no-holes (-O no-holes) 00:11:05.179 - enabled free-space-tree (-R free-space-tree) 00:11:05.179 00:11:05.179 Label: (null) 00:11:05.179 UUID: 35f0b730-3579-4fc7-b580-ed2e5a07613c 00:11:05.179 Node size: 16384 00:11:05.179 Sector size: 4096 (CPU page size: 4096) 00:11:05.179 Filesystem size: 510.00MiB 00:11:05.179 Block group profiles: 00:11:05.179 Data: single 8.00MiB 00:11:05.179 Metadata: DUP 32.00MiB 00:11:05.179 System: DUP 8.00MiB 00:11:05.179 SSD detected: yes 00:11:05.179 Zoned device: no 00:11:05.179 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:05.179 Checksum: crc32c 00:11:05.179 Number of devices: 1 00:11:05.179 Devices: 00:11:05.179 ID SIZE PATH 00:11:05.179 1 510.00MiB /dev/nvme0n1p1 00:11:05.179 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.179 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3352771 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.438 00:11:05.438 real 0m0.361s 00:11:05.438 user 0m0.036s 00:11:05.438 sys 0m0.222s 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.438 ************************************ 00:11:05.438 END TEST filesystem_in_capsule_btrfs 00:11:05.438 ************************************ 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.438 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.698 ************************************ 00:11:05.698 START TEST filesystem_in_capsule_xfs 00:11:05.698 ************************************ 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:05.698 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:05.698 = sectsz=512 attr=2, projid32bit=1 00:11:05.698 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:05.698 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:05.698 data = bsize=4096 blocks=130560, imaxpct=25 00:11:05.698 = sunit=0 swidth=0 blks 00:11:05.698 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:05.698 log =internal log bsize=4096 blocks=16384, version=2 00:11:05.698 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:05.698 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:05.698 Discarding blocks...Done. 
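After mkfs, the trace that follows exercises the fresh filesystem the same way as in the earlier runs: mount it, create and delete a file with syncs in between, unmount, then confirm the target process and block devices are still present. A compact sketch of that verification, using the pid and device names from this run (error handling and the retry counter in the script are left out):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 3352771                            # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible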
00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.698 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3352771 00:11:05.699 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.699 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.958 00:11:05.958 real 0m0.261s 00:11:05.958 user 0m0.039s 00:11:05.958 sys 0m0.112s 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:05.958 ************************************ 00:11:05.958 END TEST filesystem_in_capsule_xfs 00:11:05.958 ************************************ 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:05.958 18:16:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.899 18:16:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3352771 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3352771 ']' 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3352771 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.899 18:16:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3352771 00:11:06.899 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.899 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.899 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3352771' 00:11:06.899 killing process with pid 3352771 00:11:06.899 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3352771 00:11:06.899 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3352771 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.480 00:11:07.480 real 0m8.408s 
00:11:07.480 user 0m32.631s 00:11:07.480 sys 0m1.522s 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.480 ************************************ 00:11:07.480 END TEST nvmf_filesystem_in_capsule 00:11:07.480 ************************************ 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:07.480 rmmod nvme_rdma 00:11:07.480 rmmod nvme_fabrics 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:07.480 00:11:07.480 real 0m24.357s 00:11:07.480 user 1m7.306s 00:11:07.480 sys 0m8.576s 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.480 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.480 ************************************ 00:11:07.480 END TEST nvmf_filesystem 00:11:07.480 ************************************ 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.741 ************************************ 00:11:07.741 START TEST nvmf_target_discovery 00:11:07.741 ************************************ 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:07.741 * Looking for test storage... 
00:11:07.741 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.741 --rc genhtml_branch_coverage=1 00:11:07.741 --rc genhtml_function_coverage=1 00:11:07.741 --rc genhtml_legend=1 00:11:07.741 --rc geninfo_all_blocks=1 00:11:07.741 --rc geninfo_unexecuted_blocks=1 00:11:07.741 00:11:07.741 ' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.741 --rc genhtml_branch_coverage=1 00:11:07.741 --rc genhtml_function_coverage=1 00:11:07.741 --rc genhtml_legend=1 00:11:07.741 --rc geninfo_all_blocks=1 00:11:07.741 --rc geninfo_unexecuted_blocks=1 00:11:07.741 00:11:07.741 ' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.741 --rc genhtml_branch_coverage=1 00:11:07.741 --rc genhtml_function_coverage=1 00:11:07.741 --rc genhtml_legend=1 00:11:07.741 --rc geninfo_all_blocks=1 00:11:07.741 --rc geninfo_unexecuted_blocks=1 00:11:07.741 00:11:07.741 ' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.741 --rc genhtml_branch_coverage=1 00:11:07.741 --rc genhtml_function_coverage=1 00:11:07.741 --rc genhtml_legend=1 00:11:07.741 --rc geninfo_all_blocks=1 00:11:07.741 --rc geninfo_unexecuted_blocks=1 00:11:07.741 00:11:07.741 ' 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.741 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.742 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:08.002 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.003 18:16:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.003 18:16:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.629 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.630 18:16:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
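Note: the device scan above builds the mlx array from the cached PCI IDs and keeps only the Mellanox (0x15b3) entries because SPDK_TEST_NVMF_NICS=mlx5. A rough standalone sketch of the same check against sysfs (this is not the common.sh implementation; paths and IDs are only those reported in this run):

for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")    # 0x15b3 for Mellanox
    device=$(cat "$dev/device")    # 0x1015 for the adapters echoed just below
    [[ $vendor == 0x15b3 ]] || continue
    echo "Found ${dev##*/} ($vendor - $device)"
done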
00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:14.630 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:14.630 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:14.630 Found net devices under 0000:18:00.0: mlx_0_0 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.630 18:16:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:14.630 Found net devices under 0000:18:00.1: mlx_0_1 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # rdma_device_init 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
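Note: rdma_device_init above reduces to loading the kernel RDMA/IB stack before any addresses are assigned. The same sequence written as a standalone loop (module names copied from the trace):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done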
00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:14.630 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:14.631 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.631 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:11:14.631 altname enp24s0f0np0 00:11:14.631 altname ens785f0np0 00:11:14.631 inet 192.168.100.8/24 scope global mlx_0_0 00:11:14.631 valid_lft forever preferred_lft forever 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.631 18:16:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:14.631 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.631 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:11:14.631 altname enp24s0f1np1 00:11:14.631 altname ens785f1np1 00:11:14.631 inet 192.168.100.9/24 scope global mlx_0_1 00:11:14.631 valid_lft forever preferred_lft forever 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
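Note: the per-interface address lookup traced above is an ip/awk/cut pipeline; collected into a helper it reads as below (a sketch mirroring the get_ip_address calls in the trace, with the values seen in this run):

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 in this run
get_ip_address mlx_0_1   # 192.168.100.9 in this run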
00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:14.631 192.168.100.9' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:14.631 192.168.100.9' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # head -n 1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:14.631 192.168.100.9' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # tail -n +2 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # head -n 1 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:14.631 18:16:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3357017 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3357017 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3357017 ']' 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.631 18:16:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.631 [2024-10-08 18:16:27.703754] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:11:14.631 [2024-10-08 18:16:27.703818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.631 [2024-10-08 18:16:27.790225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.891 [2024-10-08 18:16:27.881195] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.891 [2024-10-08 18:16:27.881237] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.891 [2024-10-08 18:16:27.881246] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.891 [2024-10-08 18:16:27.881254] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.891 [2024-10-08 18:16:27.881262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
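Note: nvmfappstart above launches nvmf_tgt and blocks until its RPC socket answers. A loose approximation of that start-and-wait pattern (the polling loop is an assumption about what waitforlisten does, not its actual code; rpc_get_methods is a standard SPDK RPC and rpc.py defaults to /var/tmp/spdk.sock):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the app is up and serving RPCs on /var/tmp/spdk.sock
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done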
00:11:14.891 [2024-10-08 18:16:27.882753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.891 [2024-10-08 18:16:27.882856] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.891 [2024-10-08 18:16:27.882956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.891 [2024-10-08 18:16:27.882958] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.460 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.720 [2024-10-08 18:16:28.641753] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c7d2e0/0x1c817d0) succeed. 00:11:15.720 [2024-10-08 18:16:28.652265] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c7e920/0x1cc2e70) succeed. 
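Note: with both IB devices created and the rdma transport just added (-t rdma --num-shared-buffers 1024 -u 8192), the trace that follows provisions four null-backed subsystems plus the discovery listener and a referral. As a plain rpc.py sequence it amounts to the sketch below (rpc_cmd in the trace is a wrapper around scripts/rpc.py; all names and flags are copied from the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430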
00:11:15.720 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 Null1 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 [2024-10-08 18:16:28.821287] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 Null2 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:15.721 18:16:28 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 Null3 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.721 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 Null4 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.981 18:16:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:15.981 00:11:15.981 Discovery Log Number of Records 6, Generation counter 6 00:11:15.981 =====Discovery Log Entry 0====== 00:11:15.981 trtype: rdma 00:11:15.981 adrfam: ipv4 00:11:15.981 subtype: current discovery subsystem 00:11:15.981 treq: not required 00:11:15.981 portid: 0 00:11:15.981 trsvcid: 4420 00:11:15.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.981 traddr: 192.168.100.8 00:11:15.981 eflags: explicit discovery connections, duplicate discovery information 00:11:15.982 rdma_prtype: not specified 00:11:15.982 rdma_qptype: connected 00:11:15.982 rdma_cms: rdma-cm 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 =====Discovery Log Entry 1====== 00:11:15.982 trtype: rdma 00:11:15.982 adrfam: ipv4 00:11:15.982 subtype: nvme subsystem 00:11:15.982 treq: not required 00:11:15.982 portid: 0 00:11:15.982 trsvcid: 4420 00:11:15.982 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:15.982 traddr: 192.168.100.8 00:11:15.982 eflags: none 00:11:15.982 rdma_prtype: not specified 00:11:15.982 rdma_qptype: connected 00:11:15.982 rdma_cms: rdma-cm 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 =====Discovery Log Entry 2====== 00:11:15.982 trtype: rdma 00:11:15.982 adrfam: ipv4 00:11:15.982 subtype: nvme subsystem 00:11:15.982 treq: not required 00:11:15.982 portid: 0 00:11:15.982 trsvcid: 4420 00:11:15.982 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:15.982 traddr: 192.168.100.8 00:11:15.982 eflags: none 00:11:15.982 rdma_prtype: not specified 00:11:15.982 rdma_qptype: connected 00:11:15.982 rdma_cms: rdma-cm 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 =====Discovery Log Entry 3====== 00:11:15.982 trtype: rdma 00:11:15.982 adrfam: ipv4 00:11:15.982 subtype: nvme subsystem 00:11:15.982 treq: not required 00:11:15.982 portid: 0 00:11:15.982 trsvcid: 4420 00:11:15.982 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:15.982 traddr: 192.168.100.8 00:11:15.982 eflags: none 00:11:15.982 rdma_prtype: not specified 00:11:15.982 rdma_qptype: connected 00:11:15.982 rdma_cms: rdma-cm 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 =====Discovery Log Entry 4====== 00:11:15.982 trtype: rdma 00:11:15.982 adrfam: ipv4 00:11:15.982 subtype: nvme subsystem 00:11:15.982 treq: not required 00:11:15.982 portid: 0 00:11:15.982 trsvcid: 4420 00:11:15.982 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:15.982 traddr: 192.168.100.8 00:11:15.982 eflags: none 00:11:15.982 rdma_prtype: not specified 00:11:15.982 rdma_qptype: connected 00:11:15.982 rdma_cms: rdma-cm 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 =====Discovery Log Entry 5====== 00:11:15.982 trtype: rdma 00:11:15.982 adrfam: ipv4 00:11:15.982 subtype: discovery subsystem referral 00:11:15.982 treq: not required 00:11:15.982 portid: 0 00:11:15.982 trsvcid: 4430 00:11:15.982 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.982 traddr: 192.168.100.8 00:11:15.982 eflags: none 00:11:15.982 rdma_prtype: unrecognized 00:11:15.982 rdma_qptype: unrecognized 00:11:15.982 rdma_cms: unrecognized 00:11:15.982 rdma_pkey: 0x0000 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:15.982 Perform nvmf subsystem discovery via RPC 00:11:15.982 18:16:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.982 [ 00:11:15.982 { 00:11:15.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.982 "subtype": "Discovery", 00:11:15.982 "listen_addresses": [ 00:11:15.982 { 00:11:15.982 "trtype": "RDMA", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "192.168.100.8", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 } 00:11:15.982 ], 00:11:15.982 "allow_any_host": true, 00:11:15.982 "hosts": [] 00:11:15.982 }, 00:11:15.982 { 00:11:15.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.982 "subtype": "NVMe", 00:11:15.982 "listen_addresses": [ 00:11:15.982 { 00:11:15.982 "trtype": "RDMA", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "192.168.100.8", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 } 00:11:15.982 ], 00:11:15.982 "allow_any_host": true, 00:11:15.982 "hosts": [], 00:11:15.982 "serial_number": "SPDK00000000000001", 00:11:15.982 "model_number": "SPDK bdev Controller", 00:11:15.982 "max_namespaces": 32, 00:11:15.982 "min_cntlid": 1, 00:11:15.982 "max_cntlid": 65519, 00:11:15.982 "namespaces": [ 00:11:15.982 { 00:11:15.982 "nsid": 1, 00:11:15.982 "bdev_name": "Null1", 00:11:15.982 "name": "Null1", 00:11:15.982 "nguid": "6B9ED5F639D3442D8B18CF90CF535BBB", 00:11:15.982 "uuid": "6b9ed5f6-39d3-442d-8b18-cf90cf535bbb" 00:11:15.982 } 00:11:15.982 ] 00:11:15.982 }, 00:11:15.982 { 00:11:15.982 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:15.982 "subtype": "NVMe", 00:11:15.982 "listen_addresses": [ 00:11:15.982 { 00:11:15.982 "trtype": "RDMA", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "192.168.100.8", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 } 00:11:15.982 ], 00:11:15.982 "allow_any_host": true, 00:11:15.982 "hosts": [], 00:11:15.982 "serial_number": "SPDK00000000000002", 00:11:15.982 "model_number": "SPDK bdev Controller", 00:11:15.982 "max_namespaces": 32, 00:11:15.982 "min_cntlid": 1, 00:11:15.982 "max_cntlid": 65519, 00:11:15.982 "namespaces": [ 00:11:15.982 { 00:11:15.982 "nsid": 1, 00:11:15.982 "bdev_name": "Null2", 00:11:15.982 "name": "Null2", 00:11:15.982 "nguid": "91338B4E377144B79873668DF2433F59", 00:11:15.982 "uuid": "91338b4e-3771-44b7-9873-668df2433f59" 00:11:15.982 } 00:11:15.982 ] 00:11:15.982 }, 00:11:15.982 { 00:11:15.982 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:15.982 "subtype": "NVMe", 00:11:15.982 "listen_addresses": [ 00:11:15.982 { 00:11:15.982 "trtype": "RDMA", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "192.168.100.8", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 } 00:11:15.982 ], 00:11:15.982 "allow_any_host": true, 00:11:15.982 "hosts": [], 00:11:15.982 "serial_number": "SPDK00000000000003", 00:11:15.982 "model_number": "SPDK bdev Controller", 00:11:15.982 "max_namespaces": 32, 00:11:15.982 "min_cntlid": 1, 00:11:15.982 "max_cntlid": 65519, 00:11:15.982 "namespaces": [ 00:11:15.982 { 00:11:15.982 "nsid": 1, 00:11:15.982 "bdev_name": "Null3", 00:11:15.982 "name": "Null3", 00:11:15.982 "nguid": "A27F9AA2C4E24452A66158D1E4BBB86E", 00:11:15.982 "uuid": "a27f9aa2-c4e2-4452-a661-58d1e4bbb86e" 00:11:15.982 } 00:11:15.982 ] 00:11:15.982 }, 00:11:15.982 { 00:11:15.982 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:15.982 "subtype": "NVMe", 00:11:15.982 "listen_addresses": [ 00:11:15.982 { 00:11:15.982 
"trtype": "RDMA", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "192.168.100.8", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 } 00:11:15.982 ], 00:11:15.982 "allow_any_host": true, 00:11:15.982 "hosts": [], 00:11:15.982 "serial_number": "SPDK00000000000004", 00:11:15.982 "model_number": "SPDK bdev Controller", 00:11:15.982 "max_namespaces": 32, 00:11:15.982 "min_cntlid": 1, 00:11:15.982 "max_cntlid": 65519, 00:11:15.982 "namespaces": [ 00:11:15.982 { 00:11:15.982 "nsid": 1, 00:11:15.982 "bdev_name": "Null4", 00:11:15.982 "name": "Null4", 00:11:15.982 "nguid": "5B7B18984C2245548ED0754632655C11", 00:11:15.982 "uuid": "5b7b1898-4c22-4554-8ed0-754632655c11" 00:11:15.982 } 00:11:15.982 ] 00:11:15.982 } 00:11:15.982 ] 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.982 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:16.243 
18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:16.243 18:16:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:16.243 rmmod nvme_rdma 00:11:16.243 rmmod nvme_fabrics 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3357017 ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3357017 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3357017 ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3357017 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3357017 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3357017' 00:11:16.243 killing process with pid 3357017 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3357017 00:11:16.243 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3357017 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:16.815 00:11:16.815 real 0m9.004s 00:11:16.815 user 0m9.222s 00:11:16.815 sys 0m5.750s 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:16.815 ************************************ 00:11:16.815 END TEST nvmf_target_discovery 
00:11:16.815 ************************************ 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.815 ************************************ 00:11:16.815 START TEST nvmf_referrals 00:11:16.815 ************************************ 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:16.815 * Looking for test storage... 00:11:16.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:16.815 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.816 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.816 --rc genhtml_branch_coverage=1 00:11:16.816 --rc genhtml_function_coverage=1 00:11:16.816 --rc genhtml_legend=1 00:11:16.816 --rc geninfo_all_blocks=1 00:11:16.816 --rc geninfo_unexecuted_blocks=1 00:11:16.816 00:11:16.816 ' 00:11:16.816 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.816 --rc genhtml_branch_coverage=1 00:11:16.816 --rc genhtml_function_coverage=1 00:11:16.816 --rc genhtml_legend=1 00:11:16.816 --rc geninfo_all_blocks=1 00:11:16.816 --rc geninfo_unexecuted_blocks=1 00:11:16.816 00:11:16.816 ' 00:11:16.816 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.816 --rc genhtml_branch_coverage=1 00:11:16.816 --rc genhtml_function_coverage=1 00:11:16.816 --rc genhtml_legend=1 00:11:16.816 --rc geninfo_all_blocks=1 00:11:16.816 --rc geninfo_unexecuted_blocks=1 00:11:16.816 00:11:16.816 ' 00:11:16.816 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.816 --rc genhtml_branch_coverage=1 00:11:16.816 --rc genhtml_function_coverage=1 00:11:16.816 --rc genhtml_legend=1 00:11:16.816 --rc geninfo_all_blocks=1 00:11:16.816 --rc geninfo_unexecuted_blocks=1 00:11:16.816 00:11:16.816 ' 00:11:16.816 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
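The lcov gate above relies on the repo's cmp_versions helper, which splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right. A rough standalone sketch of the same idea (ver_lt is a hypothetical name, not the helper in scripts/common.sh):

ver_lt() {                                  # succeeds if $1 is strictly older than $2
  local IFS=.-: i
  local -a a b
  read -ra a <<< "$1"; read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                  # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov predates 2.x"   # matches the branch the trace takes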
nvmf/common.sh@7 -- # uname -s 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.077 18:16:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.077 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.078 18:16:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.656 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
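For reference, these are the fixed parameters the trace establishes before any target is started (all values copied from the trace; the hostnqn/hostid pair is whatever `nvme gen-hostnqn` produced on this particular machine):

NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100                      # RDMA test addresses live in 192.168.100.0/24
NVMF_REFERRAL_IP_1=127.0.0.2  NVMF_REFERRAL_IP_2=127.0.0.3  NVMF_REFERRAL_IP_3=127.0.0.4
NVMF_PORT_REFERRAL=4430
DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
NQN=nqn.2016-06.io.spdk:cnode1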
nvmf/common.sh@322 -- # mlx=() 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:23.657 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:23.657 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:23.657 Found net devices under 0000:18:00.0: mlx_0_0 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:23.657 Found net devices under 0000:18:00.1: mlx_0_1 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # 
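Both ports of the same Mellanox adapter (PCI device ID 0x1015) are matched here, and their netdev names are read straight out of sysfs. The discovery can be reproduced by hand with the same paths the trace uses:

ls /sys/bus/pci/devices/0000:18:00.0/net/   # -> mlx_0_0 on this host
ls /sys/bus/pci/devices/0000:18:00.1/net/   # -> mlx_0_1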
[[ rdma == tcp ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # rdma_device_init 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:23.657 18:16:36 
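With the ports identified, rdma_device_init loads the kernel RDMA stack before any IPs are assigned. The modules loaded above, in order (nvme-rdma itself comes later, just before the target app is started):

modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm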
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:23.657 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:23.918 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:23.918 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:11:23.918 altname enp24s0f0np0 00:11:23.918 altname ens785f0np0 00:11:23.918 inet 192.168.100.8/24 scope global mlx_0_0 00:11:23.918 valid_lft forever preferred_lft forever 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:23.918 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:23.918 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:11:23.918 altname enp24s0f1np1 00:11:23.918 altname ens785f1np1 00:11:23.918 inet 192.168.100.9/24 scope global mlx_0_1 00:11:23.918 valid_lft forever preferred_lft forever 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:23.918 18:16:36 
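The get_ip_address helper traced above extracts each port's IPv4 address the same way on both interfaces; a one-liner equivalent per interface (names as in the trace):

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 on this host
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9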
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:23.918 192.168.100.9' 
00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:23.918 192.168.100.9' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # head -n 1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:23.918 192.168.100.9' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # tail -n +2 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # head -n 1 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3360144 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3360144 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3360144 ']' 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.918 18:16:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.918 [2024-10-08 18:16:37.036233] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:11:23.918 [2024-10-08 18:16:37.036302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.178 [2024-10-08 18:16:37.125025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.178 [2024-10-08 18:16:37.219856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.178 [2024-10-08 18:16:37.219896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.178 [2024-10-08 18:16:37.219906] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.178 [2024-10-08 18:16:37.219915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.178 [2024-10-08 18:16:37.219922] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.178 [2024-10-08 18:16:37.221231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.178 [2024-10-08 18:16:37.221277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.178 [2024-10-08 18:16:37.221376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.178 [2024-10-08 18:16:37.221378] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.747 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.747 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:24.747 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:24.747 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.747 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.006 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:25.006 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.006 18:16:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 [2024-10-08 18:16:37.967512] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f452e0/0x1f497d0) succeed. 00:11:25.006 [2024-10-08 18:16:37.977978] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f46920/0x1f8ae70) succeed. 
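At this point the target application is running (one reactor per core in the 0xF mask) and the RDMA transport has been created on top of the two mlx5 devices. The equivalent manual sequence, with the build-tree path abbreviated relative to the workspace used in the trace:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                              # shm id 0, all tracepoint groups, cores 0-3
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # transport options exactly as in the trace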
00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 [2024-10-08 18:16:38.110537] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.007 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.007 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:25.007 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.007 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.007 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
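The referral setup traced above boils down to one listener and three referrals; in plain rpc.py form (addresses and ports exactly as in the trace):

rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery   # expose the discovery subsystem over RDMA
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430                   # advertise three further discovery services
done
rpc.py nvmf_discovery_get_referrals | jq length                                 # the test expects 3 here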
common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
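The nvme-side check reads the same information from the discovery log page instead of the RPC view. A condensed form of the command run above (hostnqn/hostid are the machine-specific values generated earlier; any valid pair would do):

nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t rdma -a 192.168.100.8 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# prints 127.0.0.2 127.0.0.3 127.0.0.4 while the three referrals are registered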
rpc_cmd nvmf_discovery_get_referrals 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.267 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
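The -n argument on nvmf_discovery_add_referral decides how the referral is reported in the discovery log: the well-known discovery NQN yields a "discovery subsystem referral" record, while a concrete subsystem NQN is reported as an "nvme subsystem" record (which is exactly what the jq checks further down confirm). The two additions made above, side by side:

rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery                     # referral to another discovery service
rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1    # referral straight to an NVM subsystem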
target/referrals.sh@21 -- # sort 00:11:25.527 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:25.789 18:16:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.049 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:26.308 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.567 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:26.826 rmmod nvme_rdma 00:11:26.826 rmmod nvme_fabrics 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3360144 ']' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3360144 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3360144 ']' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3360144 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.826 18:16:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360144 00:11:27.086 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.086 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.086 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360144' 00:11:27.086 killing process with pid 3360144 00:11:27.086 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3360144 00:11:27.086 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3360144 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:27.345 00:11:27.345 real 0m10.535s 00:11:27.345 user 0m15.161s 00:11:27.345 sys 0m6.616s 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.345 18:16:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.345 ************************************ 00:11:27.345 END TEST nvmf_referrals 00:11:27.345 ************************************ 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.345 ************************************ 00:11:27.345 START TEST nvmf_connect_disconnect 00:11:27.345 ************************************ 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:27.345 * Looking for test storage... 00:11:27.345 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.345 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.605 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.606 --rc genhtml_branch_coverage=1 00:11:27.606 --rc genhtml_function_coverage=1 00:11:27.606 --rc genhtml_legend=1 00:11:27.606 --rc geninfo_all_blocks=1 00:11:27.606 --rc geninfo_unexecuted_blocks=1 00:11:27.606 00:11:27.606 ' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.606 --rc genhtml_branch_coverage=1 00:11:27.606 --rc genhtml_function_coverage=1 00:11:27.606 --rc genhtml_legend=1 00:11:27.606 --rc geninfo_all_blocks=1 00:11:27.606 --rc geninfo_unexecuted_blocks=1 00:11:27.606 00:11:27.606 ' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.606 --rc genhtml_branch_coverage=1 00:11:27.606 --rc genhtml_function_coverage=1 00:11:27.606 --rc genhtml_legend=1 00:11:27.606 --rc geninfo_all_blocks=1 00:11:27.606 --rc geninfo_unexecuted_blocks=1 00:11:27.606 00:11:27.606 ' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.606 --rc genhtml_branch_coverage=1 00:11:27.606 --rc genhtml_function_coverage=1 00:11:27.606 --rc genhtml_legend=1 00:11:27.606 --rc geninfo_all_blocks=1 00:11:27.606 --rc geninfo_unexecuted_blocks=1 00:11:27.606 00:11:27.606 ' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.606 18:16:40 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.606 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.606 18:16:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:11:34.181 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:34.181 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:34.181 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:34.182 Found net devices under 0000:18:00.0: mlx_0_0 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 
00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:34.182 Found net devices under 0000:18:00.1: mlx_0_1 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:34.182 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:34.442 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:34.443 18:16:47 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:34.443 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.443 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:11:34.443 altname enp24s0f0np0 00:11:34.443 altname ens785f0np0 00:11:34.443 inet 192.168.100.8/24 scope global mlx_0_0 00:11:34.443 valid_lft forever preferred_lft forever 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:34.443 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.443 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:11:34.443 altname enp24s0f1np1 00:11:34.443 altname ens785f1np1 00:11:34.443 inet 192.168.100.9/24 scope global mlx_0_1 00:11:34.443 valid_lft forever preferred_lft forever 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.443 18:16:47 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:34.443 192.168.100.9' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:34.443 192.168.100.9' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:34.443 192.168.100.9' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:34.443 18:16:47 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=3363682 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3363682 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3363682 ']' 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.443 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.444 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.444 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.444 18:16:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.704 [2024-10-08 18:16:47.646387] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:11:34.704 [2024-10-08 18:16:47.646452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.704 [2024-10-08 18:16:47.733946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.704 [2024-10-08 18:16:47.822677] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.704 [2024-10-08 18:16:47.822723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.704 [2024-10-08 18:16:47.822733] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.704 [2024-10-08 18:16:47.822748] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.704 [2024-10-08 18:16:47.822755] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
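(For reference: the nvmf_tgt launch recorded just above, whose startup notices continue below, can be reproduced by hand. A minimal sketch using the exact arguments from this run; the test's nvmfappstart helper then simply waits for the application to listen on /var/tmp/spdk.sock before issuing any RPCs.)

    # launch the SPDK NVMe-oF target exactly as the harness does in this run
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # wait for /var/tmp/spdk.sock to appear, then drive the target over RPC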
00:11:34.704 [2024-10-08 18:16:47.824095] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.704 [2024-10-08 18:16:47.824133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.704 [2024-10-08 18:16:47.824233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.704 [2024-10-08 18:16:47.824235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 [2024-10-08 18:16:48.564199] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:35.640 [2024-10-08 18:16:48.585890] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15992e0/0x159d7d0) succeed. 00:11:35.640 [2024-10-08 18:16:48.596515] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x159a920/0x15dee70) succeed. 
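(For reference: the target-side configuration this test drives, the nvmf_create_transport call traced above and the bdev/subsystem calls that follow, can be issued directly. A minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock as in the SPDK test harness, with the transport, addresses and NQN taken from this run.)

    # RDMA transport, then a 64 MiB / 512 B malloc bdev exported as cnode1 on 192.168.100.8:4420
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512        # returns the bdev name, Malloc0 in this run
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420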
00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.641 [2024-10-08 18:16:48.738027] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:35.641 18:16:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:39.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:55.863 18:17:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:55.863 rmmod nvme_rdma 00:11:55.863 rmmod nvme_fabrics 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3363682 ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3363682 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3363682 ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3363682 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3363682 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3363682' 00:11:55.863 killing process with pid 3363682 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3363682 00:11:55.863 18:17:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3363682 00:11:56.122 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:56.122 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:56.122 00:11:56.122 real 0m28.886s 00:11:56.122 user 1m27.457s 00:11:56.122 sys 0m6.776s 00:11:56.122 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.122 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.122 
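The five "disconnected 1 controller(s)" lines above are the loop itself: num_iterations=5, each pass attaching a host to nqn.2016-06.io.spdk:cnode1 over RDMA and tearing it down again. A rough host-side equivalent of one such loop, assuming nvme-cli and the kernel nvme-rdma initiator are available (the actual script performs extra checks between connect and disconnect that this sketch skips):

    for i in $(seq 1 5); do
        # attach to the listener exported at 192.168.100.8:4420; -i 15 mirrors the
        # NVME_CONNECT='nvme connect -i 15' setting this harness uses for RDMA
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        sleep 1
        # drop the controller again; nvme-cli prints the "disconnected 1 controller(s)" line
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done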
************************************ 00:11:56.122 END TEST nvmf_connect_disconnect 00:11:56.122 ************************************ 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 ************************************ 00:11:56.382 START TEST nvmf_multitarget 00:11:56.382 ************************************ 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:56.382 * Looking for test storage... 00:11:56.382 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:56.382 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:56.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.642 --rc genhtml_branch_coverage=1 00:11:56.642 --rc genhtml_function_coverage=1 00:11:56.642 --rc genhtml_legend=1 00:11:56.642 --rc geninfo_all_blocks=1 00:11:56.642 --rc geninfo_unexecuted_blocks=1 00:11:56.642 00:11:56.642 ' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:56.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.642 --rc genhtml_branch_coverage=1 00:11:56.642 --rc genhtml_function_coverage=1 00:11:56.642 --rc genhtml_legend=1 00:11:56.642 --rc geninfo_all_blocks=1 00:11:56.642 --rc geninfo_unexecuted_blocks=1 00:11:56.642 00:11:56.642 ' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:56.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.642 --rc genhtml_branch_coverage=1 00:11:56.642 --rc genhtml_function_coverage=1 00:11:56.642 --rc genhtml_legend=1 00:11:56.642 --rc geninfo_all_blocks=1 00:11:56.642 --rc geninfo_unexecuted_blocks=1 00:11:56.642 00:11:56.642 ' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:56.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.642 --rc genhtml_branch_coverage=1 00:11:56.642 --rc genhtml_function_coverage=1 00:11:56.642 --rc genhtml_legend=1 00:11:56.642 --rc geninfo_all_blocks=1 00:11:56.642 --rc geninfo_unexecuted_blocks=1 00:11:56.642 00:11:56.642 ' 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.642 18:17:09 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.642 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.643 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:56.643 18:17:09 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.643 18:17:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:03.223 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:03.223 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:03.223 Found net devices under 0000:18:00.0: mlx_0_0 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:03.223 Found net devices under 0000:18:00.1: mlx_0_1 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # rdma_device_init 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:03.223 18:17:16 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.223 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:03.224 18:17:16 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:03.224 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.224 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:12:03.224 altname enp24s0f0np0 00:12:03.224 altname ens785f0np0 00:12:03.224 inet 192.168.100.8/24 scope global mlx_0_0 00:12:03.224 valid_lft forever preferred_lft forever 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:03.224 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.224 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:12:03.224 altname enp24s0f1np1 00:12:03.224 altname ens785f1np1 00:12:03.224 inet 192.168.100.9/24 scope global mlx_0_1 00:12:03.224 valid_lft forever preferred_lft forever 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:03.224 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:03.484 192.168.100.9' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:03.484 192.168.100.9' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # 
head -n 1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:03.484 192.168.100.9' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # tail -n +2 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # head -n 1 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3369447 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 3369447 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3369447 ']' 00:12:03.484 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.485 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.485 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.485 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.485 18:17:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.485 [2024-10-08 18:17:16.553895] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
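The head/tail parsing just traced is how nvmf/common.sh turns the two mlx_0_* interfaces into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. Condensed into a stand-alone sketch (interface names and addresses as observed in this run):

    get_ip_address() {
        # first IPv4 address on the interface, stripped of its /prefix
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9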
00:12:03.485 [2024-10-08 18:17:16.553957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.485 [2024-10-08 18:17:16.625211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.744 [2024-10-08 18:17:16.715849] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.745 [2024-10-08 18:17:16.715899] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.745 [2024-10-08 18:17:16.715910] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.745 [2024-10-08 18:17:16.715919] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.745 [2024-10-08 18:17:16.715927] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.745 [2024-10-08 18:17:16.721046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.745 [2024-10-08 18:17:16.721088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.745 [2024-10-08 18:17:16.721194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.745 [2024-10-08 18:17:16.721195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.314 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:04.572 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:04.572 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:04.572 "nvmf_tgt_1" 00:12:04.572 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:04.833 "nvmf_tgt_2" 00:12:04.833 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.833 
18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:04.833 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:04.833 18:17:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:05.092 true 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:05.092 true 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:05.092 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:05.354 rmmod nvme_rdma 00:12:05.354 rmmod nvme_fabrics 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3369447 ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3369447 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3369447 ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3369447 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3369447 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3369447' 00:12:05.354 killing process with pid 3369447 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3369447 00:12:05.354 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3369447 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:05.613 00:12:05.613 real 0m9.176s 00:12:05.613 user 0m10.189s 00:12:05.613 sys 0m5.806s 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.613 ************************************ 00:12:05.613 END TEST nvmf_multitarget 00:12:05.613 ************************************ 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.613 ************************************ 00:12:05.613 START TEST nvmf_rpc 00:12:05.613 ************************************ 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:05.613 * Looking for test storage... 
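For reference, the nvmf_multitarget pass that just completed reduces to a create/verify/delete cycle driven through the test's own multitarget_rpc.py wrapper. A condensed sketch of that cycle, assuming the same wrapper path and a running nvmf_tgt (-n and -s values copied from the run above):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # exactly one (implicit) target exists at start-up
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

    # add two named targets and confirm the count rises to three
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

    # delete them again and confirm only the default target remains
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]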
00:12:05.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.613 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.873 --rc genhtml_branch_coverage=1 00:12:05.873 --rc genhtml_function_coverage=1 00:12:05.873 --rc genhtml_legend=1 00:12:05.873 --rc geninfo_all_blocks=1 00:12:05.873 --rc geninfo_unexecuted_blocks=1 00:12:05.873 00:12:05.873 ' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.873 --rc genhtml_branch_coverage=1 00:12:05.873 --rc genhtml_function_coverage=1 00:12:05.873 --rc genhtml_legend=1 00:12:05.873 --rc geninfo_all_blocks=1 00:12:05.873 --rc geninfo_unexecuted_blocks=1 00:12:05.873 00:12:05.873 ' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.873 --rc genhtml_branch_coverage=1 00:12:05.873 --rc genhtml_function_coverage=1 00:12:05.873 --rc genhtml_legend=1 00:12:05.873 --rc geninfo_all_blocks=1 00:12:05.873 --rc geninfo_unexecuted_blocks=1 00:12:05.873 00:12:05.873 ' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.873 --rc genhtml_branch_coverage=1 00:12:05.873 --rc genhtml_function_coverage=1 00:12:05.873 --rc genhtml_legend=1 00:12:05.873 --rc geninfo_all_blocks=1 00:12:05.873 --rc geninfo_unexecuted_blocks=1 00:12:05.873 00:12:05.873 ' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.873 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.874 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:05.874 18:17:18 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.874 18:17:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.447 18:17:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:12.447 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:12.447 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:12.447 Found net devices under 0000:18:00.0: mlx_0_0 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:12.447 Found net devices under 0000:18:00.1: mlx_0_1 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # rdma_device_init 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:12.447 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:12.708 18:17:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:12.708 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.708 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:12:12.708 altname enp24s0f0np0 00:12:12.708 altname ens785f0np0 00:12:12.708 inet 192.168.100.8/24 scope global mlx_0_0 00:12:12.708 valid_lft forever preferred_lft forever 00:12:12.708 
18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:12.708 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.708 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:12:12.708 altname enp24s0f1np1 00:12:12.708 altname ens785f1np1 00:12:12.708 inet 192.168.100.9/24 scope global mlx_0_1 00:12:12.708 valid_lft forever preferred_lft forever 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:12.708 192.168.100.9' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:12.708 192.168.100.9' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # head -n 1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:12.708 192.168.100.9' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # tail -n +2 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # head -n 1 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:12.708 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3372743 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3372743 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3372743 ']' 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.709 18:17:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.709 [2024-10-08 18:17:25.867650] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:12:12.709 [2024-10-08 18:17:25.867710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.968 [2024-10-08 18:17:25.951345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.969 [2024-10-08 18:17:26.042757] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.969 [2024-10-08 18:17:26.042799] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.969 [2024-10-08 18:17:26.042808] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.969 [2024-10-08 18:17:26.042816] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.969 [2024-10-08 18:17:26.042823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.969 [2024-10-08 18:17:26.044186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.969 [2024-10-08 18:17:26.044290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.969 [2024-10-08 18:17:26.044390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.969 [2024-10-08 18:17:26.044391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:13.908 "tick_rate": 2300000000, 00:12:13.908 "poll_groups": [ 00:12:13.908 { 00:12:13.908 "name": "nvmf_tgt_poll_group_000", 00:12:13.908 "admin_qpairs": 0, 00:12:13.908 "io_qpairs": 0, 00:12:13.908 "current_admin_qpairs": 0, 00:12:13.908 "current_io_qpairs": 0, 00:12:13.908 "pending_bdev_io": 0, 00:12:13.908 "completed_nvme_io": 0, 00:12:13.908 "transports": [] 00:12:13.908 }, 00:12:13.908 { 00:12:13.908 "name": "nvmf_tgt_poll_group_001", 00:12:13.908 "admin_qpairs": 0, 00:12:13.908 "io_qpairs": 0, 00:12:13.908 "current_admin_qpairs": 0, 00:12:13.908 "current_io_qpairs": 0, 00:12:13.908 "pending_bdev_io": 0, 00:12:13.908 "completed_nvme_io": 0, 00:12:13.908 "transports": [] 00:12:13.908 }, 00:12:13.908 { 00:12:13.908 "name": "nvmf_tgt_poll_group_002", 00:12:13.908 "admin_qpairs": 0, 00:12:13.908 "io_qpairs": 0, 00:12:13.908 "current_admin_qpairs": 0, 00:12:13.908 "current_io_qpairs": 0, 00:12:13.908 "pending_bdev_io": 0, 00:12:13.908 "completed_nvme_io": 0, 00:12:13.908 "transports": [] 00:12:13.908 }, 00:12:13.908 { 00:12:13.908 "name": "nvmf_tgt_poll_group_003", 00:12:13.908 "admin_qpairs": 0, 00:12:13.908 "io_qpairs": 0, 00:12:13.908 "current_admin_qpairs": 0, 00:12:13.908 "current_io_qpairs": 0, 00:12:13.908 "pending_bdev_io": 0, 00:12:13.908 "completed_nvme_io": 0, 00:12:13.908 "transports": [] 00:12:13.908 } 00:12:13.908 ] 00:12:13.908 }' 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.908 18:17:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.908 [2024-10-08 18:17:26.911457] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb8e340/0xb92830) succeed. 00:12:13.908 [2024-10-08 18:17:26.922061] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb8f980/0xbd3ed0) succeed. 00:12:13.908 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.908 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:13.908 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.908 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.168 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.168 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:14.168 "tick_rate": 2300000000, 00:12:14.168 "poll_groups": [ 00:12:14.168 { 00:12:14.168 "name": "nvmf_tgt_poll_group_000", 00:12:14.168 "admin_qpairs": 0, 00:12:14.168 "io_qpairs": 0, 00:12:14.168 "current_admin_qpairs": 0, 00:12:14.168 "current_io_qpairs": 0, 00:12:14.168 "pending_bdev_io": 0, 00:12:14.168 "completed_nvme_io": 0, 00:12:14.168 "transports": [ 00:12:14.168 { 00:12:14.168 "trtype": "RDMA", 00:12:14.168 "pending_data_buffer": 0, 00:12:14.168 "devices": [ 00:12:14.168 { 00:12:14.168 "name": "mlx5_0", 00:12:14.168 "polls": 15942, 00:12:14.168 "idle_polls": 15942, 00:12:14.168 "completions": 0, 00:12:14.168 "requests": 0, 00:12:14.168 "request_latency": 0, 00:12:14.168 "pending_free_request": 0, 00:12:14.168 "pending_rdma_read": 0, 00:12:14.168 "pending_rdma_write": 0, 00:12:14.168 "pending_rdma_send": 0, 00:12:14.168 "total_send_wrs": 0, 00:12:14.168 "send_doorbell_updates": 0, 00:12:14.168 "total_recv_wrs": 4096, 00:12:14.168 "recv_doorbell_updates": 1 00:12:14.168 }, 00:12:14.168 { 00:12:14.168 "name": "mlx5_1", 00:12:14.168 "polls": 15942, 00:12:14.168 "idle_polls": 15942, 00:12:14.168 "completions": 0, 00:12:14.168 "requests": 0, 00:12:14.168 "request_latency": 0, 00:12:14.168 "pending_free_request": 0, 00:12:14.168 "pending_rdma_read": 0, 00:12:14.168 "pending_rdma_write": 0, 00:12:14.168 "pending_rdma_send": 0, 00:12:14.168 "total_send_wrs": 0, 00:12:14.168 "send_doorbell_updates": 0, 00:12:14.168 "total_recv_wrs": 4096, 00:12:14.168 "recv_doorbell_updates": 1 00:12:14.168 } 00:12:14.168 ] 00:12:14.168 } 00:12:14.168 ] 00:12:14.168 }, 00:12:14.168 { 00:12:14.168 "name": "nvmf_tgt_poll_group_001", 00:12:14.168 "admin_qpairs": 0, 00:12:14.168 "io_qpairs": 0, 00:12:14.168 "current_admin_qpairs": 0, 00:12:14.168 "current_io_qpairs": 0, 00:12:14.168 "pending_bdev_io": 0, 00:12:14.168 "completed_nvme_io": 0, 00:12:14.168 "transports": [ 00:12:14.168 { 00:12:14.168 "trtype": "RDMA", 00:12:14.168 "pending_data_buffer": 0, 00:12:14.168 "devices": [ 00:12:14.168 { 00:12:14.168 "name": "mlx5_0", 
00:12:14.168 "polls": 10350, 00:12:14.168 "idle_polls": 10350, 00:12:14.168 "completions": 0, 00:12:14.168 "requests": 0, 00:12:14.168 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 }, 00:12:14.169 { 00:12:14.169 "name": "mlx5_1", 00:12:14.169 "polls": 10350, 00:12:14.169 "idle_polls": 10350, 00:12:14.169 "completions": 0, 00:12:14.169 "requests": 0, 00:12:14.169 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 }, 00:12:14.169 { 00:12:14.169 "name": "nvmf_tgt_poll_group_002", 00:12:14.169 "admin_qpairs": 0, 00:12:14.169 "io_qpairs": 0, 00:12:14.169 "current_admin_qpairs": 0, 00:12:14.169 "current_io_qpairs": 0, 00:12:14.169 "pending_bdev_io": 0, 00:12:14.169 "completed_nvme_io": 0, 00:12:14.169 "transports": [ 00:12:14.169 { 00:12:14.169 "trtype": "RDMA", 00:12:14.169 "pending_data_buffer": 0, 00:12:14.169 "devices": [ 00:12:14.169 { 00:12:14.169 "name": "mlx5_0", 00:12:14.169 "polls": 5642, 00:12:14.169 "idle_polls": 5642, 00:12:14.169 "completions": 0, 00:12:14.169 "requests": 0, 00:12:14.169 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 }, 00:12:14.169 { 00:12:14.169 "name": "mlx5_1", 00:12:14.169 "polls": 5642, 00:12:14.169 "idle_polls": 5642, 00:12:14.169 "completions": 0, 00:12:14.169 "requests": 0, 00:12:14.169 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 }, 00:12:14.169 { 00:12:14.169 "name": "nvmf_tgt_poll_group_003", 00:12:14.169 "admin_qpairs": 0, 00:12:14.169 "io_qpairs": 0, 00:12:14.169 "current_admin_qpairs": 0, 00:12:14.169 "current_io_qpairs": 0, 00:12:14.169 "pending_bdev_io": 0, 00:12:14.169 "completed_nvme_io": 0, 00:12:14.169 "transports": [ 00:12:14.169 { 00:12:14.169 "trtype": "RDMA", 00:12:14.169 "pending_data_buffer": 0, 00:12:14.169 "devices": [ 00:12:14.169 { 00:12:14.169 "name": "mlx5_0", 00:12:14.169 "polls": 907, 00:12:14.169 "idle_polls": 907, 00:12:14.169 "completions": 0, 00:12:14.169 "requests": 0, 00:12:14.169 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 }, 00:12:14.169 { 00:12:14.169 "name": "mlx5_1", 
00:12:14.169 "polls": 907, 00:12:14.169 "idle_polls": 907, 00:12:14.169 "completions": 0, 00:12:14.169 "requests": 0, 00:12:14.169 "request_latency": 0, 00:12:14.169 "pending_free_request": 0, 00:12:14.169 "pending_rdma_read": 0, 00:12:14.169 "pending_rdma_write": 0, 00:12:14.169 "pending_rdma_send": 0, 00:12:14.169 "total_send_wrs": 0, 00:12:14.169 "send_doorbell_updates": 0, 00:12:14.169 "total_recv_wrs": 4096, 00:12:14.169 "recv_doorbell_updates": 1 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 } 00:12:14.169 ] 00:12:14.169 }' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:14.169 18:17:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.169 Malloc1 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.169 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.429 [2024-10-08 18:17:27.355310] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:14.429 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:14.430 18:17:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:14.430 [2024-10-08 18:17:27.401309] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562' 00:12:14.430 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:14.430 could not add new controller: failed to write to nvme-fabrics device 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.430 18:17:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:15.370 18:17:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.370 18:17:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.370 18:17:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.371 18:17:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:15.371 18:17:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.909 18:17:30 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:17.909 18:17:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.477 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.477 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:18.477 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:18.477 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.478 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.478 [2024-10-08 18:17:31.603422] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562' 00:12:18.737 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.737 could not add new controller: failed to write to nvme-fabrics device 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.737 18:17:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:19.675 18:17:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.675 18:17:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.675 18:17:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.675 18:17:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:19.675 18:17:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:21.582 18:17:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.963 [2024-10-08 18:17:35.785889] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.963 18:17:35 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.963 18:17:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.902 18:17:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.902 18:17:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.902 18:17:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.902 18:17:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:23.902 18:17:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:25.818 18:17:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
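The waitforserial helper traced after the nvme connect simply polls lsblk until a block device advertising the subsystem's serial shows up. A simplified sketch of the logic visible above (the 15-attempt, 2-second budget matches the trace; the argument handling is an assumption):

  # Poll until at least one block device reports the expected NVMe serial.
  waitforserial() {
      local serial=$1 want=${2:-1} i=0 got
      while (( i++ <= 15 )); do
          got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( got >= want )) && return 0
          sleep 2
      done
      return 1
  }
  # waitforserial SPDKISFASTANDAWESOME    # returns 0 once the namespace appears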
xtrace_disable 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.759 [2024-10-08 18:17:39.898233] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.759 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.760 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.760 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.760 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.760 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.760 18:17:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:28.141 18:17:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.141 18:17:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.141 18:17:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.141 18:17:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:28.141 18:17:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.049 
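Between iterations the test tears everything back down (rpc.sh@90-94): disconnect the initiator, remove namespace 5, delete the subsystem. Condensed, with the same rpc.py path assumed as above:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1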
18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:30.049 18:17:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 [2024-10-08 18:17:43.979521] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.989 18:17:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:32.370 18:17:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.370 18:17:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:32.370 18:17:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.370 18:17:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:32.370 18:17:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:34.280 18:17:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.223 18:17:48 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.223 [2024-10-08 18:17:48.378342] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.223 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.483 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.483 18:17:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:36.478 18:17:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.478 18:17:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.478 18:17:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.478 18:17:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:36.738 18:17:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:38.648 18:17:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.588 18:17:52 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.588 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.848 [2024-10-08 18:17:52.770388] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.848 18:17:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:40.870 18:17:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.870 18:17:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.131 18:17:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.131 18:17:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.131 18:17:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.040 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.040 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.040 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.041 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.041 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:43.041 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.041 18:17:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 [2024-10-08 18:17:57.154977] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.138 [2024-10-08 18:17:57.203378] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.138 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 
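This second loop (rpc.sh@99-107) repeats the lifecycle without ever connecting a host, and nvmf_subsystem_add_ns is now called without -n, so the target assigns the next free NSID (1 here), which is why the cleanup removes namespace 1 rather than 5. One pass, sketched with the rpc.py path assumed as before:

  nqn=nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1     # no -n: target picks NSID 1
  ./scripts/rpc.py nvmf_subsystem_allow_any_host "$nqn"
  ./scripts/rpc.py nvmf_subsystem_remove_ns "$nqn" 1
  ./scripts/rpc.py nvmf_delete_subsystem "$nqn"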
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 [2024-10-08 18:17:57.251583] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 [2024-10-08 18:17:57.299759] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.139 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 [2024-10-08 18:17:57.347903] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.397 18:17:57 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.397 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:44.398 "tick_rate": 2300000000, 00:12:44.398 "poll_groups": [ 00:12:44.398 { 00:12:44.398 "name": "nvmf_tgt_poll_group_000", 00:12:44.398 "admin_qpairs": 2, 00:12:44.398 "io_qpairs": 27, 00:12:44.398 "current_admin_qpairs": 0, 00:12:44.398 "current_io_qpairs": 0, 00:12:44.398 "pending_bdev_io": 0, 00:12:44.398 "completed_nvme_io": 128, 00:12:44.398 "transports": [ 00:12:44.398 { 00:12:44.398 "trtype": "RDMA", 00:12:44.398 "pending_data_buffer": 0, 00:12:44.398 "devices": [ 00:12:44.398 { 00:12:44.398 "name": "mlx5_0", 00:12:44.398 "polls": 3714871, 00:12:44.398 "idle_polls": 3714540, 00:12:44.398 "completions": 371, 00:12:44.398 "requests": 185, 00:12:44.398 "request_latency": 34464430, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 313, 00:12:44.398 "send_doorbell_updates": 164, 00:12:44.398 "total_recv_wrs": 4281, 00:12:44.398 "recv_doorbell_updates": 164 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "mlx5_1", 00:12:44.398 "polls": 3714871, 00:12:44.398 "idle_polls": 3714871, 00:12:44.398 "completions": 0, 00:12:44.398 "requests": 0, 00:12:44.398 "request_latency": 0, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 0, 00:12:44.398 "send_doorbell_updates": 0, 00:12:44.398 "total_recv_wrs": 4096, 00:12:44.398 "recv_doorbell_updates": 1 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "nvmf_tgt_poll_group_001", 00:12:44.398 "admin_qpairs": 2, 00:12:44.398 "io_qpairs": 26, 00:12:44.398 "current_admin_qpairs": 0, 00:12:44.398 "current_io_qpairs": 0, 00:12:44.398 "pending_bdev_io": 0, 00:12:44.398 "completed_nvme_io": 126, 00:12:44.398 "transports": [ 00:12:44.398 { 00:12:44.398 "trtype": "RDMA", 00:12:44.398 "pending_data_buffer": 0, 00:12:44.398 "devices": [ 00:12:44.398 { 00:12:44.398 "name": "mlx5_0", 00:12:44.398 "polls": 3832769, 00:12:44.398 "idle_polls": 3832449, 00:12:44.398 "completions": 364, 00:12:44.398 "requests": 182, 00:12:44.398 "request_latency": 36018964, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 308, 00:12:44.398 "send_doorbell_updates": 159, 00:12:44.398 "total_recv_wrs": 4278, 00:12:44.398 "recv_doorbell_updates": 160 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "mlx5_1", 00:12:44.398 "polls": 3832769, 00:12:44.398 "idle_polls": 3832769, 00:12:44.398 "completions": 0, 00:12:44.398 "requests": 0, 00:12:44.398 "request_latency": 0, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 
"pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 0, 00:12:44.398 "send_doorbell_updates": 0, 00:12:44.398 "total_recv_wrs": 4096, 00:12:44.398 "recv_doorbell_updates": 1 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "nvmf_tgt_poll_group_002", 00:12:44.398 "admin_qpairs": 1, 00:12:44.398 "io_qpairs": 26, 00:12:44.398 "current_admin_qpairs": 0, 00:12:44.398 "current_io_qpairs": 0, 00:12:44.398 "pending_bdev_io": 0, 00:12:44.398 "completed_nvme_io": 126, 00:12:44.398 "transports": [ 00:12:44.398 { 00:12:44.398 "trtype": "RDMA", 00:12:44.398 "pending_data_buffer": 0, 00:12:44.398 "devices": [ 00:12:44.398 { 00:12:44.398 "name": "mlx5_0", 00:12:44.398 "polls": 3737528, 00:12:44.398 "idle_polls": 3737257, 00:12:44.398 "completions": 309, 00:12:44.398 "requests": 154, 00:12:44.398 "request_latency": 31140886, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 268, 00:12:44.398 "send_doorbell_updates": 135, 00:12:44.398 "total_recv_wrs": 4250, 00:12:44.398 "recv_doorbell_updates": 135 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "mlx5_1", 00:12:44.398 "polls": 3737528, 00:12:44.398 "idle_polls": 3737528, 00:12:44.398 "completions": 0, 00:12:44.398 "requests": 0, 00:12:44.398 "request_latency": 0, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 0, 00:12:44.398 "send_doorbell_updates": 0, 00:12:44.398 "total_recv_wrs": 4096, 00:12:44.398 "recv_doorbell_updates": 1 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "nvmf_tgt_poll_group_003", 00:12:44.398 "admin_qpairs": 2, 00:12:44.398 "io_qpairs": 26, 00:12:44.398 "current_admin_qpairs": 0, 00:12:44.398 "current_io_qpairs": 0, 00:12:44.398 "pending_bdev_io": 0, 00:12:44.398 "completed_nvme_io": 75, 00:12:44.398 "transports": [ 00:12:44.398 { 00:12:44.398 "trtype": "RDMA", 00:12:44.398 "pending_data_buffer": 0, 00:12:44.398 "devices": [ 00:12:44.398 { 00:12:44.398 "name": "mlx5_0", 00:12:44.398 "polls": 2976569, 00:12:44.398 "idle_polls": 2976332, 00:12:44.398 "completions": 262, 00:12:44.398 "requests": 131, 00:12:44.398 "request_latency": 22670004, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 206, 00:12:44.398 "send_doorbell_updates": 119, 00:12:44.398 "total_recv_wrs": 4227, 00:12:44.398 "recv_doorbell_updates": 120 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "mlx5_1", 00:12:44.398 "polls": 2976569, 00:12:44.398 "idle_polls": 2976569, 00:12:44.398 "completions": 0, 00:12:44.398 "requests": 0, 00:12:44.398 "request_latency": 0, 00:12:44.398 "pending_free_request": 0, 00:12:44.398 "pending_rdma_read": 0, 00:12:44.398 "pending_rdma_write": 0, 00:12:44.398 "pending_rdma_send": 0, 00:12:44.398 "total_send_wrs": 0, 00:12:44.398 "send_doorbell_updates": 0, 00:12:44.398 "total_recv_wrs": 4096, 00:12:44.398 "recv_doorbell_updates": 1 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 }' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:44.398 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1306 > 0 )) 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 124294284 > 0 )) 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:44.657 rmmod nvme_rdma 00:12:44.657 rmmod nvme_fabrics 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.657 
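The jsum checks above are the test's pass/fail criteria: each sums one numeric field across the nvmf_get_stats JSON and asserts the total is positive (admin qpairs, I/O qpairs, RDMA completions, cumulative request latency). A sketch of the helper as traced, assuming the stats JSON has first been captured in $stats:

  jsum() {
      local filter=$1
      echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  stats=$(./scripts/rpc.py nvmf_get_stats)
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))                                  # 105 in this run
  (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))     # 124294284 here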
18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3372743 ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3372743 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3372743 ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3372743 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3372743 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3372743' 00:12:44.657 killing process with pid 3372743 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3372743 00:12:44.657 18:17:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3372743 00:12:44.916 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:44.916 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:45.176 00:12:45.176 real 0m39.458s 00:12:45.176 user 2m10.156s 00:12:45.176 sys 0m8.377s 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 ************************************ 00:12:45.176 END TEST nvmf_rpc 00:12:45.176 ************************************ 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 ************************************ 00:12:45.176 START TEST nvmf_invalid 00:12:45.176 ************************************ 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:45.176 * Looking for test storage... 
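nvmftestfini then unloads nvme-rdma/nvme-fabrics and stops the target process (PID 3372743 in this run), checking the process name before signalling it. A simplified sketch of that killprocess step; the sudo branch is an assumption based on the reactor_0/sudo comparison in the trace:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 0              # already gone, nothing to do
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [[ $name == sudo ]]; then
          sudo kill "$pid"                    # assumed: escalate for sudo-wrapped targets
      else
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true
  }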
00:12:45.176 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:12:45.176 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:45.436 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.437 --rc genhtml_branch_coverage=1 00:12:45.437 --rc genhtml_function_coverage=1 00:12:45.437 --rc genhtml_legend=1 00:12:45.437 --rc geninfo_all_blocks=1 00:12:45.437 --rc geninfo_unexecuted_blocks=1 00:12:45.437 00:12:45.437 ' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.437 --rc genhtml_branch_coverage=1 00:12:45.437 --rc genhtml_function_coverage=1 00:12:45.437 --rc genhtml_legend=1 00:12:45.437 --rc geninfo_all_blocks=1 00:12:45.437 --rc geninfo_unexecuted_blocks=1 00:12:45.437 00:12:45.437 ' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.437 --rc genhtml_branch_coverage=1 00:12:45.437 --rc genhtml_function_coverage=1 00:12:45.437 --rc genhtml_legend=1 00:12:45.437 --rc geninfo_all_blocks=1 00:12:45.437 --rc geninfo_unexecuted_blocks=1 00:12:45.437 00:12:45.437 ' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.437 --rc genhtml_branch_coverage=1 00:12:45.437 --rc genhtml_function_coverage=1 00:12:45.437 --rc genhtml_legend=1 00:12:45.437 --rc geninfo_all_blocks=1 00:12:45.437 --rc geninfo_unexecuted_blocks=1 00:12:45.437 00:12:45.437 ' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:45.437 
18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.437 18:17:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.008 18:18:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:52.008 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:52.008 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:52.009 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:52.009 Found net devices under 0000:18:00.0: mlx_0_0 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:52.009 Found net devices under 0000:18:00.1: mlx_0_1 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # rdma_device_init 00:12:52.009 18:18:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:52.009 18:18:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:52.009 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:52.269 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:52.269 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:12:52.269 altname enp24s0f0np0 00:12:52.269 altname ens785f0np0 00:12:52.269 inet 192.168.100.8/24 scope global mlx_0_0 00:12:52.269 valid_lft forever preferred_lft forever 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:52.269 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:52.269 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:12:52.269 altname enp24s0f1np1 00:12:52.269 altname ens785f1np1 00:12:52.269 inet 192.168.100.9/24 scope global mlx_0_1 00:12:52.269 valid_lft forever preferred_lft forever 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:52.269 192.168.100.9' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:52.269 192.168.100.9' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # head -n 1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- 
# echo '192.168.100.8 00:12:52.269 192.168.100.9' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # tail -n +2 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # head -n 1 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.269 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3380486 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3380486 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3380486 ']' 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.270 18:18:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.270 [2024-10-08 18:18:05.379691] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:12:52.270 [2024-10-08 18:18:05.379760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.529 [2024-10-08 18:18:05.471797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.529 [2024-10-08 18:18:05.562931] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.529 [2024-10-08 18:18:05.562976] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
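By this point the trace has resolved the two mlx5 ports to 192.168.100.8 and 192.168.100.9, set NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024', loaded nvme-rdma, and nvmfappstart has launched the target (pid 3380486). A condensed sketch of that start-up, with the binary path and flags copied from the log and a simple poll loop standing in for waitforlisten (an assumption, not the helper's exact code):

    # Sketch: start nvmf_tgt and wait for its RPC socket, as nvmfappstart does above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5        # target not yet listening on /var/tmp/spdk.sock
    done

The remaining NOTICE lines and the reactor start-up messages that follow are the target's own output, captured inline by the same trace.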
00:12:52.529 [2024-10-08 18:18:05.562987] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.529 [2024-10-08 18:18:05.563006] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.529 [2024-10-08 18:18:05.563014] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.529 [2024-10-08 18:18:05.564448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.529 [2024-10-08 18:18:05.564495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.529 [2024-10-08 18:18:05.564593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.529 [2024-10-08 18:18:05.564594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.097 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.097 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:53.097 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:53.097 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.097 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13117 00:12:53.357 [2024-10-08 18:18:06.445127] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:53.357 { 00:12:53.357 "nqn": "nqn.2016-06.io.spdk:cnode13117", 00:12:53.357 "tgt_name": "foobar", 00:12:53.357 "method": "nvmf_create_subsystem", 00:12:53.357 "req_id": 1 00:12:53.357 } 00:12:53.357 Got JSON-RPC error response 00:12:53.357 response: 00:12:53.357 { 00:12:53.357 "code": -32603, 00:12:53.357 "message": "Unable to find target foobar" 00:12:53.357 }' 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:53.357 { 00:12:53.357 "nqn": "nqn.2016-06.io.spdk:cnode13117", 00:12:53.357 "tgt_name": "foobar", 00:12:53.357 "method": "nvmf_create_subsystem", 00:12:53.357 "req_id": 1 00:12:53.357 } 00:12:53.357 Got JSON-RPC error response 00:12:53.357 response: 00:12:53.357 { 00:12:53.357 "code": -32603, 00:12:53.357 "message": "Unable to find target foobar" 00:12:53.357 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:53.357 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15819 00:12:53.616 [2024-10-08 18:18:06.653909] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode15819: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:53.616 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:53.616 { 00:12:53.616 "nqn": "nqn.2016-06.io.spdk:cnode15819", 00:12:53.616 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.616 "method": "nvmf_create_subsystem", 00:12:53.616 "req_id": 1 00:12:53.616 } 00:12:53.616 Got JSON-RPC error response 00:12:53.616 response: 00:12:53.616 { 00:12:53.616 "code": -32602, 00:12:53.616 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.616 }' 00:12:53.616 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:53.616 { 00:12:53.616 "nqn": "nqn.2016-06.io.spdk:cnode15819", 00:12:53.616 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.616 "method": "nvmf_create_subsystem", 00:12:53.616 "req_id": 1 00:12:53.616 } 00:12:53.616 Got JSON-RPC error response 00:12:53.616 response: 00:12:53.616 { 00:12:53.616 "code": -32602, 00:12:53.616 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.616 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.616 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:53.616 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4447 00:12:53.876 [2024-10-08 18:18:06.862570] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4447: invalid model number 'SPDK_Controller' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:53.876 { 00:12:53.876 "nqn": "nqn.2016-06.io.spdk:cnode4447", 00:12:53.876 "model_number": "SPDK_Controller\u001f", 00:12:53.876 "method": "nvmf_create_subsystem", 00:12:53.876 "req_id": 1 00:12:53.876 } 00:12:53.876 Got JSON-RPC error response 00:12:53.876 response: 00:12:53.876 { 00:12:53.876 "code": -32602, 00:12:53.876 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.876 }' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:53.876 { 00:12:53.876 "nqn": "nqn.2016-06.io.spdk:cnode4447", 00:12:53.876 "model_number": "SPDK_Controller\u001f", 00:12:53.876 "method": "nvmf_create_subsystem", 00:12:53.876 "req_id": 1 00:12:53.876 } 00:12:53.876 Got JSON-RPC error response 00:12:53.876 response: 00:12:53.876 { 00:12:53.876 "code": -32602, 00:12:53.876 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.876 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@21 -- # local chars 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
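The per-character trace that continues below is gen_random_s at work: invalid.sh seeds RANDOM=0 (so the "random" serial and model numbers are reproducible), then picks length entries from the chars array and appends each one via a printf %x / \xNN round trip. A compact equivalent of that loop (a sketch, not the verbatim helper):

    # gen_random_s_sketch N  ->  print an N-character string drawn from code points 32..127
    gen_random_s_sketch() {
        local length=$1 ll string=
        local chars=({32..127})                           # same code-point range as invalid.sh
        for ((ll = 0; ll < length; ll++)); do
            local c=${chars[RANDOM % ${#chars[@]}]}
            string+=$(printf "\\x$(printf %x "$c")")      # decimal -> \xNN escape -> character
        done
        printf '%s\n' "$string"
    }

The 21-character string assembled in this trace, '6aW>+!;[GntkD]JptGM,F', is then submitted as a subsystem serial number and rejected with an "Invalid SN" JSON-RPC error, which is exactly the substring the test checks for.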
00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:53.876 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x44' 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.877 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.138 18:18:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6aW>+!;[GntkD]JptGM,F' 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '6aW>+!;[GntkD]JptGM,F' nqn.2016-06.io.spdk:cnode16997 00:12:54.138 [2024-10-08 18:18:07.251889] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16997: invalid serial number '6aW>+!;[GntkD]JptGM,F' 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:54.138 { 00:12:54.138 "nqn": "nqn.2016-06.io.spdk:cnode16997", 00:12:54.138 "serial_number": "6aW>+!;[GntkD]JptGM,F", 00:12:54.138 "method": "nvmf_create_subsystem", 00:12:54.138 "req_id": 1 00:12:54.138 } 00:12:54.138 Got JSON-RPC error response 00:12:54.138 response: 00:12:54.138 { 00:12:54.138 "code": -32602, 00:12:54.138 "message": "Invalid SN 6aW>+!;[GntkD]JptGM,F" 00:12:54.138 }' 00:12:54.138 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:54.138 { 00:12:54.138 "nqn": "nqn.2016-06.io.spdk:cnode16997", 00:12:54.138 "serial_number": "6aW>+!;[GntkD]JptGM,F", 00:12:54.138 "method": "nvmf_create_subsystem", 00:12:54.138 "req_id": 1 00:12:54.138 } 00:12:54.138 Got JSON-RPC error response 00:12:54.138 response: 00:12:54.138 { 00:12:54.138 "code": -32602, 00:12:54.138 "message": "Invalid SN 6aW>+!;[GntkD]JptGM,F" 00:12:54.138 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.139 18:18:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.139 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.399 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.400 18:18:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 120 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:54.400 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.401 18:18:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:54.401 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=g 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:12:54.661 18:18:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2^JPm2AmfDmIEY0ol8a{2q\jmxc^IU ver2_l ? ver1_l : ver2_l) )) 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:57.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.304 --rc genhtml_branch_coverage=1 00:12:57.304 --rc genhtml_function_coverage=1 00:12:57.304 --rc genhtml_legend=1 00:12:57.304 --rc geninfo_all_blocks=1 00:12:57.304 --rc geninfo_unexecuted_blocks=1 00:12:57.304 00:12:57.304 ' 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:57.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.304 --rc genhtml_branch_coverage=1 00:12:57.304 --rc genhtml_function_coverage=1 00:12:57.304 --rc genhtml_legend=1 00:12:57.304 --rc geninfo_all_blocks=1 00:12:57.304 --rc geninfo_unexecuted_blocks=1 00:12:57.304 00:12:57.304 ' 00:12:57.304 
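The invalid-name generation traced above builds its test string one byte at a time: each pass prints the chosen code point with printf %x, renders it with echo -e '\xNN', and appends the result. A minimal sketch of that pattern, using a helper name and byte range chosen here for illustration rather than the upstream target/invalid.sh implementation:

    gen_random_string() {
        # Append $length random characters, one per iteration, mirroring the trace:
        # printf supplies the hex code, echo -e renders it, += accumulates the string.
        local length=${1:-40} ll code string=''
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 94 + 33 ))   # printable ASCII; the traced run also emitted 0x7f (DEL)
            string+=$(echo -e "\x$(printf %x "$code")")
        done
        echo "$string"
    }
    # usage: name=$(gen_random_string 41)
    # the test then feeds such a string to an nvmf RPC and expects the call to be rejected as invalid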
18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:57.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.304 --rc genhtml_branch_coverage=1 00:12:57.304 --rc genhtml_function_coverage=1 00:12:57.304 --rc genhtml_legend=1 00:12:57.304 --rc geninfo_all_blocks=1 00:12:57.304 --rc geninfo_unexecuted_blocks=1 00:12:57.304 00:12:57.304 ' 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:57.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.304 --rc genhtml_branch_coverage=1 00:12:57.304 --rc genhtml_function_coverage=1 00:12:57.304 --rc genhtml_legend=1 00:12:57.304 --rc geninfo_all_blocks=1 00:12:57.304 --rc geninfo_unexecuted_blocks=1 00:12:57.304 00:12:57.304 ' 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.304 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.564 18:18:10 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.564 18:18:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:04.138 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:04.138 18:18:17 
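The device scan traced here walks the cached PCI IDs, keeps only the supported NICs (vendor 0x15b3 is Mellanox; device 0x1015 is the part found at 0000:18:00.0 and 0000:18:00.1), and then resolves each matching function to its netdev name under sysfs. A rough equivalent written directly against /sys, as an illustration rather than the common.sh helper itself:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
            # e.g. prints mlx_0_0 for 0000:18:00.0 and mlx_0_1 for 0000:18:00.1
            ls "$pci/net" 2> /dev/null
        fi
    done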
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:04.138 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:04.138 Found net devices under 0000:18:00.0: mlx_0_0 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:04.138 Found net devices under 0000:18:00.1: mlx_0_1 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:04.138 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:04.139 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:04.139 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:13:04.139 altname enp24s0f0np0 00:13:04.139 altname ens785f0np0 00:13:04.139 inet 192.168.100.8/24 scope global mlx_0_0 00:13:04.139 valid_lft forever preferred_lft forever 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:04.139 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:04.139 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:13:04.139 altname enp24s0f1np1 00:13:04.139 altname ens785f1np1 00:13:04.139 inet 192.168.100.9/24 scope global mlx_0_1 00:13:04.139 valid_lft forever preferred_lft forever 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:04.139 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # 
interface=mlx_0_0 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:04.399 192.168.100.9' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:04.399 192.168.100.9' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # head -n 1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:04.399 192.168.100.9' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # tail -n +2 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # head -n 1 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3384340 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:04.399 18:18:17 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3384340 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3384340 ']' 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.399 18:18:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.399 [2024-10-08 18:18:17.467551] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:13:04.399 [2024-10-08 18:18:17.467613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.399 [2024-10-08 18:18:17.553676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.659 [2024-10-08 18:18:17.640338] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.659 [2024-10-08 18:18:17.640377] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.659 [2024-10-08 18:18:17.640387] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.659 [2024-10-08 18:18:17.640396] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.659 [2024-10-08 18:18:17.640403] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
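nvmfappstart launches the target with core mask 0xE (cores 1 to 3, matching the three reactors reported below) and then blocks in waitforlisten until the app answers on its RPC socket. A hedged sketch of that start-and-wait sequence; the polling loop and the rpc_get_methods probe are illustrative assumptions, not the actual waitforlisten implementation:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do
        # keep probing the UNIX-domain RPC socket until the target is ready
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done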
00:13:04.659 [2024-10-08 18:18:17.641207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.659 [2024-10-08 18:18:17.641308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.659 [2024-10-08 18:18:17.641309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.228 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.488 [2024-10-08 18:18:18.416041] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23fcab0/0x2400fa0) succeed. 00:13:05.488 [2024-10-08 18:18:18.426446] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23fe050/0x2442640) succeed. 
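With both mlx5 IB devices created, the trace that follows configures the target entirely over RPC: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 192.168.100.8:4420, and a null bdev (NULL1) for the test to use. The rpc_cmd wrapper in the test drives these through /var/tmp/spdk.sock; issued by hand they would look roughly like:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512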
00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.488 [2024-10-08 18:18:18.535750] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.488 NULL1 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3384482 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:05.488 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:05.747 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:05.747 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.747 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.747 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.007 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.007 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:06.007 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.007 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.007 18:18:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.266 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.266 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:06.267 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.267 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.267 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.526 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.526 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:06.526 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.526 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.526 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.096 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.096 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:07.096 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.096 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.096 18:18:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.355 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.355 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 
00:13:07.355 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.355 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.355 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.615 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.615 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:07.615 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.615 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.615 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.875 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.875 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:07.875 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.875 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.875 18:18:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.134 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.134 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:08.134 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.135 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.135 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.703 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.703 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:08.703 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.703 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.703 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.963 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.963 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:08.963 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.963 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.963 18:18:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.222 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.222 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3384482 00:13:09.222 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.222 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.222 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.482 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.482 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:09.482 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.482 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.482 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.741 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:09.741 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.741 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.741 18:18:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.310 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:10.310 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.310 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.310 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.569 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:10.569 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.569 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.569 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.828 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.828 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:10.828 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.828 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.828 18:18:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.087 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.087 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3384482 00:13:11.087 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.087 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.087 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.656 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.656 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:11.656 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.656 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.656 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.915 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.915 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:11.915 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.915 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.915 18:18:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.175 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.175 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:12.175 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.175 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.175 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.434 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.434 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:12.434 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.434 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.434 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.693 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.693 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:12.693 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.693 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.693 18:18:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.261 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.261 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 3384482 00:13:13.261 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.261 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.261 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.520 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.520 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:13.520 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.520 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.520 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.781 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.781 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:13.781 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.781 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.781 18:18:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.040 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.040 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:14.040 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.040 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.040 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.694 18:18:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.262 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.262 18:18:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:15.262 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.262 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.262 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.521 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.521 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:15.521 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.521 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.521 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.521 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3384482 00:13:15.781 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3384482) - No such process 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3384482 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:15.781 rmmod nvme_rdma 00:13:15.781 rmmod nvme_fabrics 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3384340 ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3384340 00:13:15.781 18:18:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3384340 ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3384340 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3384340 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3384340' 00:13:15.781 killing process with pid 3384340 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3384340 00:13:15.781 18:18:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3384340 00:13:16.040 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:16.040 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:16.300 00:13:16.300 real 0m18.950s 00:13:16.300 user 0m42.386s 00:13:16.300 sys 0m7.828s 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.300 ************************************ 00:13:16.300 END TEST nvmf_connect_stress 00:13:16.300 ************************************ 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.300 ************************************ 00:13:16.300 START TEST nvmf_fused_ordering 00:13:16.300 ************************************ 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:16.300 * Looking for test storage... 
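The long run of kill -0 3384482 / rpc_cmd pairs in the nvmf_connect_stress output above is the test's keep-alive loop: while the connect_stress tool (PERF_PID 3384482) hammers the RDMA listener, the harness repeatedly confirms the process is still alive and replays a batch of RPCs, until kill finally reports "No such process" and the wait at connect_stress.sh@38 reaps the tool. Condensed into a stand-alone sketch (pid and rpc.txt path taken from the log; the RPC payload itself is not visible there, so it stays a comment):

  PERF_PID=3384482
  rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
  while kill -0 "$PERF_PID" 2>/dev/null; do
      # the real loop feeds the commands accumulated in $rpcs to the target here
      # (the bare "rpc_cmd" lines above); their content is not shown in the log
      sleep 1   # stand-in for the time the real RPC batch takes each pass
  done
  wait "$PERF_PID"   # collects the stress tool's exit status once it has gone away

With the stress run finished, rpc.txt is removed, nvmftestfini unloads nvme-rdma and nvme-fabrics and kills the target (pid 3384340), and the log moves on to the nvmf_fused_ordering test whose output follows.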
00:13:16.300 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:16.300 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.560 --rc genhtml_branch_coverage=1 00:13:16.560 --rc genhtml_function_coverage=1 00:13:16.560 --rc genhtml_legend=1 00:13:16.560 --rc geninfo_all_blocks=1 00:13:16.560 --rc geninfo_unexecuted_blocks=1 00:13:16.560 00:13:16.560 ' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.560 --rc genhtml_branch_coverage=1 00:13:16.560 --rc genhtml_function_coverage=1 00:13:16.560 --rc genhtml_legend=1 00:13:16.560 --rc geninfo_all_blocks=1 00:13:16.560 --rc geninfo_unexecuted_blocks=1 00:13:16.560 00:13:16.560 ' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.560 --rc genhtml_branch_coverage=1 00:13:16.560 --rc genhtml_function_coverage=1 00:13:16.560 --rc genhtml_legend=1 00:13:16.560 --rc geninfo_all_blocks=1 00:13:16.560 --rc geninfo_unexecuted_blocks=1 00:13:16.560 00:13:16.560 ' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.560 --rc genhtml_branch_coverage=1 00:13:16.560 --rc genhtml_function_coverage=1 00:13:16.560 --rc genhtml_legend=1 00:13:16.560 --rc geninfo_all_blocks=1 00:13:16.560 --rc geninfo_unexecuted_blocks=1 00:13:16.560 00:13:16.560 ' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.560 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.561 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.561 18:18:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:23.137 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:23.138 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:23.138 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:23.138 Found net devices under 0000:18:00.0: mlx_0_0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:23.138 Found net devices under 0000:18:00.1: mlx_0_1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.138 18:18:36 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # rdma_device_init 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.138 
18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:23.138 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.138 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:13:23.138 altname enp24s0f0np0 00:13:23.138 altname ens785f0np0 00:13:23.138 inet 192.168.100.8/24 scope global mlx_0_0 00:13:23.138 valid_lft forever preferred_lft forever 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:23.138 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.138 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:13:23.138 altname enp24s0f1np1 00:13:23.138 altname ens785f1np1 00:13:23.138 inet 192.168.100.9/24 scope global mlx_0_1 00:13:23.138 valid_lft forever preferred_lft forever 00:13:23.138 18:18:36 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.138 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.139 
18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:23.139 192.168.100.9' 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:23.139 192.168.100.9' 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # head -n 1 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:23.139 192.168.100.9' 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # tail -n +2 00:13:23.139 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # head -n 1 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3388754 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3388754 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3388754 ']' 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.398 18:18:36 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.398 18:18:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.399 [2024-10-08 18:18:36.403155] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:13:23.399 [2024-10-08 18:18:36.403222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.399 [2024-10-08 18:18:36.491579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.658 [2024-10-08 18:18:36.583785] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.658 [2024-10-08 18:18:36.583822] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.658 [2024-10-08 18:18:36.583832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.658 [2024-10-08 18:18:36.583839] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.658 [2024-10-08 18:18:36.583846] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.658 [2024-10-08 18:18:36.584327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.227 [2024-10-08 18:18:37.322386] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10953b0/0x10998a0) succeed. 00:13:24.227 [2024-10-08 18:18:37.332321] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10968b0/0x10daf40) succeed. 
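The trace above covers target bring-up for the fused-ordering test: the RDMA/IB kernel modules are loaded (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), the two mlx5 ports are found to already carry 192.168.100.8 and 192.168.100.9, nvmf_tgt is started on core 1, the suite waits for its RPC socket at /var/tmp/spdk.sock, and an RDMA transport is created, which registers both IB devices. Since rpc_cmd simply forwards its arguments to scripts/rpc.py, the same bring-up can be sketched as standalone commands; this is a minimal sketch using the paths, core mask and options copied from this run, with the socket-wait loop standing in for the suite's waitforlisten helper:

  # start the NVMe-oF target on core 1 with tracepoints enabled, as in this run
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # crude stand-in for waitforlisten: wait until the RPC socket appears
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # create the RDMA transport with the options used by nvmfappstart above
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192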
00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.227 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.227 [2024-10-08 18:18:37.398433] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 NULL1 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.487 18:18:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:24.487 [2024-10-08 18:18:37.456074] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
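Immediately above, the target was provisioned over RPC and the fused-ordering client was launched against it: subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces) gets an RDMA listener on 192.168.100.8:4420 and a 1000 MB null bdev attached as namespace 1. A hand-run equivalent, again assuming rpc_cmd maps to scripts/rpc.py and reusing the exact arguments from this trace, would look roughly like:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MB backing device, 512-byte blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # the test client then connects over RDMA and exercises fused command ordering
  # (producing the fused_ordering(N) lines that follow)
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'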
00:13:24.487 [2024-10-08 18:18:37.456122] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388954 ] 00:13:24.487 Attached to nqn.2016-06.io.spdk:cnode1 00:13:24.487 Namespace ID: 1 size: 1GB 00:13:24.487 fused_ordering(0) 00:13:24.487 fused_ordering(1) 00:13:24.487 fused_ordering(2) 00:13:24.487 fused_ordering(3) 00:13:24.487 fused_ordering(4) 00:13:24.487 fused_ordering(5) 00:13:24.487 fused_ordering(6) 00:13:24.487 fused_ordering(7) 00:13:24.487 fused_ordering(8) 00:13:24.487 fused_ordering(9) 00:13:24.487 fused_ordering(10) 00:13:24.487 fused_ordering(11) 00:13:24.487 fused_ordering(12) 00:13:24.487 fused_ordering(13) 00:13:24.487 fused_ordering(14) 00:13:24.487 fused_ordering(15) 00:13:24.487 fused_ordering(16) 00:13:24.487 fused_ordering(17) 00:13:24.487 fused_ordering(18) 00:13:24.487 fused_ordering(19) 00:13:24.487 fused_ordering(20) 00:13:24.487 fused_ordering(21) 00:13:24.487 fused_ordering(22) 00:13:24.487 fused_ordering(23) 00:13:24.487 fused_ordering(24) 00:13:24.487 fused_ordering(25) 00:13:24.487 fused_ordering(26) 00:13:24.487 fused_ordering(27) 00:13:24.487 fused_ordering(28) 00:13:24.487 fused_ordering(29) 00:13:24.487 fused_ordering(30) 00:13:24.488 fused_ordering(31) 00:13:24.488 fused_ordering(32) 00:13:24.488 fused_ordering(33) 00:13:24.488 fused_ordering(34) 00:13:24.488 fused_ordering(35) 00:13:24.488 fused_ordering(36) 00:13:24.488 fused_ordering(37) 00:13:24.488 fused_ordering(38) 00:13:24.488 fused_ordering(39) 00:13:24.488 fused_ordering(40) 00:13:24.488 fused_ordering(41) 00:13:24.488 fused_ordering(42) 00:13:24.488 fused_ordering(43) 00:13:24.488 fused_ordering(44) 00:13:24.488 fused_ordering(45) 00:13:24.488 fused_ordering(46) 00:13:24.488 fused_ordering(47) 00:13:24.488 fused_ordering(48) 00:13:24.488 fused_ordering(49) 00:13:24.488 fused_ordering(50) 00:13:24.488 fused_ordering(51) 00:13:24.488 fused_ordering(52) 00:13:24.488 fused_ordering(53) 00:13:24.488 fused_ordering(54) 00:13:24.488 fused_ordering(55) 00:13:24.488 fused_ordering(56) 00:13:24.488 fused_ordering(57) 00:13:24.488 fused_ordering(58) 00:13:24.488 fused_ordering(59) 00:13:24.488 fused_ordering(60) 00:13:24.488 fused_ordering(61) 00:13:24.488 fused_ordering(62) 00:13:24.488 fused_ordering(63) 00:13:24.488 fused_ordering(64) 00:13:24.488 fused_ordering(65) 00:13:24.488 fused_ordering(66) 00:13:24.488 fused_ordering(67) 00:13:24.488 fused_ordering(68) 00:13:24.488 fused_ordering(69) 00:13:24.488 fused_ordering(70) 00:13:24.488 fused_ordering(71) 00:13:24.488 fused_ordering(72) 00:13:24.488 fused_ordering(73) 00:13:24.488 fused_ordering(74) 00:13:24.488 fused_ordering(75) 00:13:24.488 fused_ordering(76) 00:13:24.488 fused_ordering(77) 00:13:24.488 fused_ordering(78) 00:13:24.488 fused_ordering(79) 00:13:24.488 fused_ordering(80) 00:13:24.488 fused_ordering(81) 00:13:24.488 fused_ordering(82) 00:13:24.488 fused_ordering(83) 00:13:24.488 fused_ordering(84) 00:13:24.488 fused_ordering(85) 00:13:24.488 fused_ordering(86) 00:13:24.488 fused_ordering(87) 00:13:24.488 fused_ordering(88) 00:13:24.488 fused_ordering(89) 00:13:24.488 fused_ordering(90) 00:13:24.488 fused_ordering(91) 00:13:24.488 fused_ordering(92) 00:13:24.488 fused_ordering(93) 00:13:24.488 fused_ordering(94) 00:13:24.488 fused_ordering(95) 00:13:24.488 fused_ordering(96) 00:13:24.488 fused_ordering(97) 00:13:24.488 fused_ordering(98) 
00:13:24.488 fused_ordering(99) 00:13:24.488 fused_ordering(100) 00:13:24.488 fused_ordering(101) 00:13:24.488 fused_ordering(102) 00:13:24.488 fused_ordering(103) 00:13:24.488 fused_ordering(104) 00:13:24.488 fused_ordering(105) 00:13:24.488 fused_ordering(106) 00:13:24.488 fused_ordering(107) 00:13:24.488 fused_ordering(108) 00:13:24.488 fused_ordering(109) 00:13:24.488 fused_ordering(110) 00:13:24.488 fused_ordering(111) 00:13:24.488 fused_ordering(112) 00:13:24.488 fused_ordering(113) 00:13:24.488 fused_ordering(114) 00:13:24.488 fused_ordering(115) 00:13:24.488 fused_ordering(116) 00:13:24.488 fused_ordering(117) 00:13:24.488 fused_ordering(118) 00:13:24.488 fused_ordering(119) 00:13:24.488 fused_ordering(120) 00:13:24.488 fused_ordering(121) 00:13:24.488 fused_ordering(122) 00:13:24.488 fused_ordering(123) 00:13:24.488 fused_ordering(124) 00:13:24.488 fused_ordering(125) 00:13:24.488 fused_ordering(126) 00:13:24.488 fused_ordering(127) 00:13:24.488 fused_ordering(128) 00:13:24.488 fused_ordering(129) 00:13:24.488 fused_ordering(130) 00:13:24.488 fused_ordering(131) 00:13:24.488 fused_ordering(132) 00:13:24.488 fused_ordering(133) 00:13:24.488 fused_ordering(134) 00:13:24.488 fused_ordering(135) 00:13:24.488 fused_ordering(136) 00:13:24.488 fused_ordering(137) 00:13:24.488 fused_ordering(138) 00:13:24.488 fused_ordering(139) 00:13:24.488 fused_ordering(140) 00:13:24.488 fused_ordering(141) 00:13:24.488 fused_ordering(142) 00:13:24.488 fused_ordering(143) 00:13:24.488 fused_ordering(144) 00:13:24.488 fused_ordering(145) 00:13:24.488 fused_ordering(146) 00:13:24.488 fused_ordering(147) 00:13:24.488 fused_ordering(148) 00:13:24.488 fused_ordering(149) 00:13:24.488 fused_ordering(150) 00:13:24.488 fused_ordering(151) 00:13:24.488 fused_ordering(152) 00:13:24.488 fused_ordering(153) 00:13:24.488 fused_ordering(154) 00:13:24.488 fused_ordering(155) 00:13:24.488 fused_ordering(156) 00:13:24.488 fused_ordering(157) 00:13:24.488 fused_ordering(158) 00:13:24.488 fused_ordering(159) 00:13:24.488 fused_ordering(160) 00:13:24.488 fused_ordering(161) 00:13:24.488 fused_ordering(162) 00:13:24.488 fused_ordering(163) 00:13:24.488 fused_ordering(164) 00:13:24.488 fused_ordering(165) 00:13:24.488 fused_ordering(166) 00:13:24.488 fused_ordering(167) 00:13:24.488 fused_ordering(168) 00:13:24.488 fused_ordering(169) 00:13:24.488 fused_ordering(170) 00:13:24.488 fused_ordering(171) 00:13:24.488 fused_ordering(172) 00:13:24.488 fused_ordering(173) 00:13:24.488 fused_ordering(174) 00:13:24.488 fused_ordering(175) 00:13:24.488 fused_ordering(176) 00:13:24.488 fused_ordering(177) 00:13:24.488 fused_ordering(178) 00:13:24.488 fused_ordering(179) 00:13:24.488 fused_ordering(180) 00:13:24.488 fused_ordering(181) 00:13:24.488 fused_ordering(182) 00:13:24.488 fused_ordering(183) 00:13:24.488 fused_ordering(184) 00:13:24.488 fused_ordering(185) 00:13:24.488 fused_ordering(186) 00:13:24.488 fused_ordering(187) 00:13:24.488 fused_ordering(188) 00:13:24.488 fused_ordering(189) 00:13:24.488 fused_ordering(190) 00:13:24.488 fused_ordering(191) 00:13:24.488 fused_ordering(192) 00:13:24.488 fused_ordering(193) 00:13:24.488 fused_ordering(194) 00:13:24.488 fused_ordering(195) 00:13:24.488 fused_ordering(196) 00:13:24.488 fused_ordering(197) 00:13:24.488 fused_ordering(198) 00:13:24.488 fused_ordering(199) 00:13:24.488 fused_ordering(200) 00:13:24.488 fused_ordering(201) 00:13:24.488 fused_ordering(202) 00:13:24.488 fused_ordering(203) 00:13:24.488 fused_ordering(204) 00:13:24.488 fused_ordering(205) 00:13:24.748 
fused_ordering(206) 00:13:24.748 fused_ordering(207) 00:13:24.748 fused_ordering(208) 00:13:24.748 fused_ordering(209) 00:13:24.748 fused_ordering(210) 00:13:24.748 fused_ordering(211) 00:13:24.748 fused_ordering(212) 00:13:24.748 fused_ordering(213) 00:13:24.748 fused_ordering(214) 00:13:24.748 fused_ordering(215) 00:13:24.748 fused_ordering(216) 00:13:24.748 fused_ordering(217) 00:13:24.748 fused_ordering(218) 00:13:24.748 fused_ordering(219) 00:13:24.748 fused_ordering(220) 00:13:24.748 fused_ordering(221) 00:13:24.748 fused_ordering(222) 00:13:24.748 fused_ordering(223) 00:13:24.748 fused_ordering(224) 00:13:24.748 fused_ordering(225) 00:13:24.748 fused_ordering(226) 00:13:24.748 fused_ordering(227) 00:13:24.748 fused_ordering(228) 00:13:24.748 fused_ordering(229) 00:13:24.748 fused_ordering(230) 00:13:24.748 fused_ordering(231) 00:13:24.748 fused_ordering(232) 00:13:24.748 fused_ordering(233) 00:13:24.748 fused_ordering(234) 00:13:24.748 fused_ordering(235) 00:13:24.748 fused_ordering(236) 00:13:24.748 fused_ordering(237) 00:13:24.748 fused_ordering(238) 00:13:24.748 fused_ordering(239) 00:13:24.748 fused_ordering(240) 00:13:24.748 fused_ordering(241) 00:13:24.748 fused_ordering(242) 00:13:24.748 fused_ordering(243) 00:13:24.748 fused_ordering(244) 00:13:24.748 fused_ordering(245) 00:13:24.748 fused_ordering(246) 00:13:24.748 fused_ordering(247) 00:13:24.748 fused_ordering(248) 00:13:24.748 fused_ordering(249) 00:13:24.748 fused_ordering(250) 00:13:24.748 fused_ordering(251) 00:13:24.748 fused_ordering(252) 00:13:24.748 fused_ordering(253) 00:13:24.748 fused_ordering(254) 00:13:24.748 fused_ordering(255) 00:13:24.748 fused_ordering(256) 00:13:24.748 fused_ordering(257) 00:13:24.748 fused_ordering(258) 00:13:24.748 fused_ordering(259) 00:13:24.748 fused_ordering(260) 00:13:24.748 fused_ordering(261) 00:13:24.748 fused_ordering(262) 00:13:24.748 fused_ordering(263) 00:13:24.748 fused_ordering(264) 00:13:24.748 fused_ordering(265) 00:13:24.748 fused_ordering(266) 00:13:24.748 fused_ordering(267) 00:13:24.748 fused_ordering(268) 00:13:24.748 fused_ordering(269) 00:13:24.748 fused_ordering(270) 00:13:24.748 fused_ordering(271) 00:13:24.748 fused_ordering(272) 00:13:24.748 fused_ordering(273) 00:13:24.748 fused_ordering(274) 00:13:24.748 fused_ordering(275) 00:13:24.748 fused_ordering(276) 00:13:24.748 fused_ordering(277) 00:13:24.748 fused_ordering(278) 00:13:24.748 fused_ordering(279) 00:13:24.748 fused_ordering(280) 00:13:24.748 fused_ordering(281) 00:13:24.748 fused_ordering(282) 00:13:24.748 fused_ordering(283) 00:13:24.748 fused_ordering(284) 00:13:24.748 fused_ordering(285) 00:13:24.748 fused_ordering(286) 00:13:24.748 fused_ordering(287) 00:13:24.748 fused_ordering(288) 00:13:24.748 fused_ordering(289) 00:13:24.748 fused_ordering(290) 00:13:24.748 fused_ordering(291) 00:13:24.748 fused_ordering(292) 00:13:24.748 fused_ordering(293) 00:13:24.748 fused_ordering(294) 00:13:24.748 fused_ordering(295) 00:13:24.748 fused_ordering(296) 00:13:24.748 fused_ordering(297) 00:13:24.748 fused_ordering(298) 00:13:24.748 fused_ordering(299) 00:13:24.748 fused_ordering(300) 00:13:24.748 fused_ordering(301) 00:13:24.748 fused_ordering(302) 00:13:24.748 fused_ordering(303) 00:13:24.748 fused_ordering(304) 00:13:24.748 fused_ordering(305) 00:13:24.748 fused_ordering(306) 00:13:24.748 fused_ordering(307) 00:13:24.748 fused_ordering(308) 00:13:24.748 fused_ordering(309) 00:13:24.748 fused_ordering(310) 00:13:24.748 fused_ordering(311) 00:13:24.748 fused_ordering(312) 00:13:24.748 fused_ordering(313) 
00:13:24.748 fused_ordering(314) 00:13:24.748 fused_ordering(315) 00:13:24.748 fused_ordering(316) 00:13:24.748 fused_ordering(317) 00:13:24.748 fused_ordering(318) 00:13:24.748 fused_ordering(319) 00:13:24.748 fused_ordering(320) 00:13:24.748 fused_ordering(321) 00:13:24.748 fused_ordering(322) 00:13:24.748 fused_ordering(323) 00:13:24.748 fused_ordering(324) 00:13:24.748 fused_ordering(325) 00:13:24.748 fused_ordering(326) 00:13:24.748 fused_ordering(327) 00:13:24.748 fused_ordering(328) 00:13:24.748 fused_ordering(329) 00:13:24.748 fused_ordering(330) 00:13:24.748 fused_ordering(331) 00:13:24.748 fused_ordering(332) 00:13:24.748 fused_ordering(333) 00:13:24.748 fused_ordering(334) 00:13:24.748 fused_ordering(335) 00:13:24.748 fused_ordering(336) 00:13:24.748 fused_ordering(337) 00:13:24.748 fused_ordering(338) 00:13:24.748 fused_ordering(339) 00:13:24.748 fused_ordering(340) 00:13:24.748 fused_ordering(341) 00:13:24.748 fused_ordering(342) 00:13:24.748 fused_ordering(343) 00:13:24.748 fused_ordering(344) 00:13:24.748 fused_ordering(345) 00:13:24.748 fused_ordering(346) 00:13:24.748 fused_ordering(347) 00:13:24.748 fused_ordering(348) 00:13:24.748 fused_ordering(349) 00:13:24.748 fused_ordering(350) 00:13:24.748 fused_ordering(351) 00:13:24.748 fused_ordering(352) 00:13:24.748 fused_ordering(353) 00:13:24.748 fused_ordering(354) 00:13:24.748 fused_ordering(355) 00:13:24.748 fused_ordering(356) 00:13:24.748 fused_ordering(357) 00:13:24.748 fused_ordering(358) 00:13:24.748 fused_ordering(359) 00:13:24.748 fused_ordering(360) 00:13:24.748 fused_ordering(361) 00:13:24.748 fused_ordering(362) 00:13:24.748 fused_ordering(363) 00:13:24.748 fused_ordering(364) 00:13:24.748 fused_ordering(365) 00:13:24.748 fused_ordering(366) 00:13:24.748 fused_ordering(367) 00:13:24.748 fused_ordering(368) 00:13:24.748 fused_ordering(369) 00:13:24.748 fused_ordering(370) 00:13:24.748 fused_ordering(371) 00:13:24.748 fused_ordering(372) 00:13:24.748 fused_ordering(373) 00:13:24.748 fused_ordering(374) 00:13:24.748 fused_ordering(375) 00:13:24.748 fused_ordering(376) 00:13:24.748 fused_ordering(377) 00:13:24.748 fused_ordering(378) 00:13:24.748 fused_ordering(379) 00:13:24.748 fused_ordering(380) 00:13:24.748 fused_ordering(381) 00:13:24.748 fused_ordering(382) 00:13:24.748 fused_ordering(383) 00:13:24.748 fused_ordering(384) 00:13:24.748 fused_ordering(385) 00:13:24.748 fused_ordering(386) 00:13:24.748 fused_ordering(387) 00:13:24.748 fused_ordering(388) 00:13:24.748 fused_ordering(389) 00:13:24.748 fused_ordering(390) 00:13:24.748 fused_ordering(391) 00:13:24.748 fused_ordering(392) 00:13:24.748 fused_ordering(393) 00:13:24.748 fused_ordering(394) 00:13:24.748 fused_ordering(395) 00:13:24.748 fused_ordering(396) 00:13:24.748 fused_ordering(397) 00:13:24.748 fused_ordering(398) 00:13:24.749 fused_ordering(399) 00:13:24.749 fused_ordering(400) 00:13:24.749 fused_ordering(401) 00:13:24.749 fused_ordering(402) 00:13:24.749 fused_ordering(403) 00:13:24.749 fused_ordering(404) 00:13:24.749 fused_ordering(405) 00:13:24.749 fused_ordering(406) 00:13:24.749 fused_ordering(407) 00:13:24.749 fused_ordering(408) 00:13:24.749 fused_ordering(409) 00:13:24.749 fused_ordering(410) 00:13:24.749 fused_ordering(411) 00:13:24.749 fused_ordering(412) 00:13:24.749 fused_ordering(413) 00:13:24.749 fused_ordering(414) 00:13:24.749 fused_ordering(415) 00:13:24.749 fused_ordering(416) 00:13:24.749 fused_ordering(417) 00:13:24.749 fused_ordering(418) 00:13:24.749 fused_ordering(419) 00:13:24.749 fused_ordering(420) 00:13:24.749 
fused_ordering(421) 00:13:24.749 fused_ordering(422) 00:13:24.749 fused_ordering(423) 00:13:24.749 fused_ordering(424) 00:13:24.749 fused_ordering(425) 00:13:24.749 fused_ordering(426) 00:13:24.749 fused_ordering(427) 00:13:24.749 fused_ordering(428) 00:13:24.749 fused_ordering(429) 00:13:24.749 fused_ordering(430) 00:13:24.749 fused_ordering(431) 00:13:24.749 fused_ordering(432) 00:13:24.749 fused_ordering(433) 00:13:24.749 fused_ordering(434) 00:13:24.749 fused_ordering(435) 00:13:24.749 fused_ordering(436) 00:13:24.749 fused_ordering(437) 00:13:24.749 fused_ordering(438) 00:13:24.749 fused_ordering(439) 00:13:24.749 fused_ordering(440) 00:13:24.749 fused_ordering(441) 00:13:24.749 fused_ordering(442) 00:13:24.749 fused_ordering(443) 00:13:24.749 fused_ordering(444) 00:13:24.749 fused_ordering(445) 00:13:24.749 fused_ordering(446) 00:13:24.749 fused_ordering(447) 00:13:24.749 fused_ordering(448) 00:13:24.749 fused_ordering(449) 00:13:24.749 fused_ordering(450) 00:13:24.749 fused_ordering(451) 00:13:24.749 fused_ordering(452) 00:13:24.749 fused_ordering(453) 00:13:24.749 fused_ordering(454) 00:13:24.749 fused_ordering(455) 00:13:24.749 fused_ordering(456) 00:13:24.749 fused_ordering(457) 00:13:24.749 fused_ordering(458) 00:13:24.749 fused_ordering(459) 00:13:24.749 fused_ordering(460) 00:13:24.749 fused_ordering(461) 00:13:24.749 fused_ordering(462) 00:13:24.749 fused_ordering(463) 00:13:24.749 fused_ordering(464) 00:13:24.749 fused_ordering(465) 00:13:24.749 fused_ordering(466) 00:13:24.749 fused_ordering(467) 00:13:24.749 fused_ordering(468) 00:13:24.749 fused_ordering(469) 00:13:24.749 fused_ordering(470) 00:13:24.749 fused_ordering(471) 00:13:24.749 fused_ordering(472) 00:13:24.749 fused_ordering(473) 00:13:24.749 fused_ordering(474) 00:13:24.749 fused_ordering(475) 00:13:24.749 fused_ordering(476) 00:13:24.749 fused_ordering(477) 00:13:24.749 fused_ordering(478) 00:13:24.749 fused_ordering(479) 00:13:24.749 fused_ordering(480) 00:13:24.749 fused_ordering(481) 00:13:24.749 fused_ordering(482) 00:13:24.749 fused_ordering(483) 00:13:24.749 fused_ordering(484) 00:13:24.749 fused_ordering(485) 00:13:24.749 fused_ordering(486) 00:13:24.749 fused_ordering(487) 00:13:24.749 fused_ordering(488) 00:13:24.749 fused_ordering(489) 00:13:24.749 fused_ordering(490) 00:13:24.749 fused_ordering(491) 00:13:24.749 fused_ordering(492) 00:13:24.749 fused_ordering(493) 00:13:24.749 fused_ordering(494) 00:13:24.749 fused_ordering(495) 00:13:24.749 fused_ordering(496) 00:13:24.749 fused_ordering(497) 00:13:24.749 fused_ordering(498) 00:13:24.749 fused_ordering(499) 00:13:24.749 fused_ordering(500) 00:13:24.749 fused_ordering(501) 00:13:24.749 fused_ordering(502) 00:13:24.749 fused_ordering(503) 00:13:24.749 fused_ordering(504) 00:13:24.749 fused_ordering(505) 00:13:24.749 fused_ordering(506) 00:13:24.749 fused_ordering(507) 00:13:24.749 fused_ordering(508) 00:13:24.749 fused_ordering(509) 00:13:24.749 fused_ordering(510) 00:13:24.749 fused_ordering(511) 00:13:24.749 fused_ordering(512) 00:13:24.749 fused_ordering(513) 00:13:24.749 fused_ordering(514) 00:13:24.749 fused_ordering(515) 00:13:24.749 fused_ordering(516) 00:13:24.749 fused_ordering(517) 00:13:24.749 fused_ordering(518) 00:13:24.749 fused_ordering(519) 00:13:24.749 fused_ordering(520) 00:13:24.749 fused_ordering(521) 00:13:24.749 fused_ordering(522) 00:13:24.749 fused_ordering(523) 00:13:24.749 fused_ordering(524) 00:13:24.749 fused_ordering(525) 00:13:24.749 fused_ordering(526) 00:13:24.749 fused_ordering(527) 00:13:24.749 fused_ordering(528) 
00:13:24.749 fused_ordering(529) 00:13:24.749 fused_ordering(530) 00:13:24.749 fused_ordering(531) 00:13:24.749 fused_ordering(532) 00:13:24.749 fused_ordering(533) 00:13:24.749 fused_ordering(534) 00:13:24.749 fused_ordering(535) 00:13:24.749 fused_ordering(536) 00:13:24.749 fused_ordering(537) 00:13:24.749 fused_ordering(538) 00:13:24.749 fused_ordering(539) 00:13:24.749 fused_ordering(540) 00:13:24.749 fused_ordering(541) 00:13:24.749 fused_ordering(542) 00:13:24.749 fused_ordering(543) 00:13:24.749 fused_ordering(544) 00:13:24.749 fused_ordering(545) 00:13:24.749 fused_ordering(546) 00:13:24.749 fused_ordering(547) 00:13:24.749 fused_ordering(548) 00:13:24.749 fused_ordering(549) 00:13:24.749 fused_ordering(550) 00:13:24.749 fused_ordering(551) 00:13:24.749 fused_ordering(552) 00:13:24.749 fused_ordering(553) 00:13:24.749 fused_ordering(554) 00:13:24.749 fused_ordering(555) 00:13:24.749 fused_ordering(556) 00:13:24.749 fused_ordering(557) 00:13:24.749 fused_ordering(558) 00:13:24.749 fused_ordering(559) 00:13:24.749 fused_ordering(560) 00:13:24.749 fused_ordering(561) 00:13:24.749 fused_ordering(562) 00:13:24.749 fused_ordering(563) 00:13:24.749 fused_ordering(564) 00:13:24.749 fused_ordering(565) 00:13:24.749 fused_ordering(566) 00:13:24.749 fused_ordering(567) 00:13:24.749 fused_ordering(568) 00:13:24.749 fused_ordering(569) 00:13:24.749 fused_ordering(570) 00:13:24.749 fused_ordering(571) 00:13:24.749 fused_ordering(572) 00:13:24.749 fused_ordering(573) 00:13:24.749 fused_ordering(574) 00:13:24.749 fused_ordering(575) 00:13:24.749 fused_ordering(576) 00:13:24.749 fused_ordering(577) 00:13:24.749 fused_ordering(578) 00:13:24.749 fused_ordering(579) 00:13:24.749 fused_ordering(580) 00:13:24.749 fused_ordering(581) 00:13:24.749 fused_ordering(582) 00:13:24.749 fused_ordering(583) 00:13:24.749 fused_ordering(584) 00:13:24.749 fused_ordering(585) 00:13:24.749 fused_ordering(586) 00:13:24.749 fused_ordering(587) 00:13:24.749 fused_ordering(588) 00:13:24.749 fused_ordering(589) 00:13:24.749 fused_ordering(590) 00:13:24.749 fused_ordering(591) 00:13:24.749 fused_ordering(592) 00:13:24.749 fused_ordering(593) 00:13:24.749 fused_ordering(594) 00:13:24.749 fused_ordering(595) 00:13:24.749 fused_ordering(596) 00:13:24.749 fused_ordering(597) 00:13:24.749 fused_ordering(598) 00:13:24.749 fused_ordering(599) 00:13:24.749 fused_ordering(600) 00:13:24.749 fused_ordering(601) 00:13:24.749 fused_ordering(602) 00:13:24.749 fused_ordering(603) 00:13:24.749 fused_ordering(604) 00:13:24.749 fused_ordering(605) 00:13:24.749 fused_ordering(606) 00:13:24.749 fused_ordering(607) 00:13:24.749 fused_ordering(608) 00:13:24.749 fused_ordering(609) 00:13:24.749 fused_ordering(610) 00:13:24.749 fused_ordering(611) 00:13:24.749 fused_ordering(612) 00:13:24.749 fused_ordering(613) 00:13:24.749 fused_ordering(614) 00:13:24.749 fused_ordering(615) 00:13:25.010 fused_ordering(616) 00:13:25.010 fused_ordering(617) 00:13:25.010 fused_ordering(618) 00:13:25.010 fused_ordering(619) 00:13:25.010 fused_ordering(620) 00:13:25.010 fused_ordering(621) 00:13:25.010 fused_ordering(622) 00:13:25.010 fused_ordering(623) 00:13:25.010 fused_ordering(624) 00:13:25.010 fused_ordering(625) 00:13:25.010 fused_ordering(626) 00:13:25.010 fused_ordering(627) 00:13:25.010 fused_ordering(628) 00:13:25.010 fused_ordering(629) 00:13:25.010 fused_ordering(630) 00:13:25.010 fused_ordering(631) 00:13:25.010 fused_ordering(632) 00:13:25.010 fused_ordering(633) 00:13:25.010 fused_ordering(634) 00:13:25.010 fused_ordering(635) 00:13:25.010 
fused_ordering(636) 00:13:25.010 fused_ordering(637) 00:13:25.010 fused_ordering(638) 00:13:25.010 fused_ordering(639) 00:13:25.010 fused_ordering(640) 00:13:25.010 fused_ordering(641) 00:13:25.010 fused_ordering(642) 00:13:25.010 fused_ordering(643) 00:13:25.010 fused_ordering(644) 00:13:25.010 fused_ordering(645) 00:13:25.010 fused_ordering(646) 00:13:25.010 fused_ordering(647) 00:13:25.010 fused_ordering(648) 00:13:25.010 fused_ordering(649) 00:13:25.010 fused_ordering(650) 00:13:25.010 fused_ordering(651) 00:13:25.010 fused_ordering(652) 00:13:25.010 fused_ordering(653) 00:13:25.010 fused_ordering(654) 00:13:25.010 fused_ordering(655) 00:13:25.010 fused_ordering(656) 00:13:25.010 fused_ordering(657) 00:13:25.010 fused_ordering(658) 00:13:25.010 fused_ordering(659) 00:13:25.010 fused_ordering(660) 00:13:25.010 fused_ordering(661) 00:13:25.010 fused_ordering(662) 00:13:25.010 fused_ordering(663) 00:13:25.010 fused_ordering(664) 00:13:25.010 fused_ordering(665) 00:13:25.010 fused_ordering(666) 00:13:25.010 fused_ordering(667) 00:13:25.010 fused_ordering(668) 00:13:25.010 fused_ordering(669) 00:13:25.010 fused_ordering(670) 00:13:25.010 fused_ordering(671) 00:13:25.010 fused_ordering(672) 00:13:25.010 fused_ordering(673) 00:13:25.010 fused_ordering(674) 00:13:25.010 fused_ordering(675) 00:13:25.010 fused_ordering(676) 00:13:25.010 fused_ordering(677) 00:13:25.010 fused_ordering(678) 00:13:25.010 fused_ordering(679) 00:13:25.010 fused_ordering(680) 00:13:25.010 fused_ordering(681) 00:13:25.010 fused_ordering(682) 00:13:25.010 fused_ordering(683) 00:13:25.010 fused_ordering(684) 00:13:25.010 fused_ordering(685) 00:13:25.010 fused_ordering(686) 00:13:25.010 fused_ordering(687) 00:13:25.010 fused_ordering(688) 00:13:25.010 fused_ordering(689) 00:13:25.010 fused_ordering(690) 00:13:25.010 fused_ordering(691) 00:13:25.010 fused_ordering(692) 00:13:25.010 fused_ordering(693) 00:13:25.010 fused_ordering(694) 00:13:25.010 fused_ordering(695) 00:13:25.010 fused_ordering(696) 00:13:25.010 fused_ordering(697) 00:13:25.010 fused_ordering(698) 00:13:25.010 fused_ordering(699) 00:13:25.010 fused_ordering(700) 00:13:25.010 fused_ordering(701) 00:13:25.010 fused_ordering(702) 00:13:25.010 fused_ordering(703) 00:13:25.010 fused_ordering(704) 00:13:25.010 fused_ordering(705) 00:13:25.010 fused_ordering(706) 00:13:25.010 fused_ordering(707) 00:13:25.010 fused_ordering(708) 00:13:25.010 fused_ordering(709) 00:13:25.010 fused_ordering(710) 00:13:25.010 fused_ordering(711) 00:13:25.010 fused_ordering(712) 00:13:25.010 fused_ordering(713) 00:13:25.010 fused_ordering(714) 00:13:25.010 fused_ordering(715) 00:13:25.010 fused_ordering(716) 00:13:25.010 fused_ordering(717) 00:13:25.010 fused_ordering(718) 00:13:25.010 fused_ordering(719) 00:13:25.010 fused_ordering(720) 00:13:25.010 fused_ordering(721) 00:13:25.010 fused_ordering(722) 00:13:25.010 fused_ordering(723) 00:13:25.010 fused_ordering(724) 00:13:25.010 fused_ordering(725) 00:13:25.010 fused_ordering(726) 00:13:25.010 fused_ordering(727) 00:13:25.010 fused_ordering(728) 00:13:25.010 fused_ordering(729) 00:13:25.010 fused_ordering(730) 00:13:25.010 fused_ordering(731) 00:13:25.010 fused_ordering(732) 00:13:25.010 fused_ordering(733) 00:13:25.010 fused_ordering(734) 00:13:25.010 fused_ordering(735) 00:13:25.010 fused_ordering(736) 00:13:25.010 fused_ordering(737) 00:13:25.010 fused_ordering(738) 00:13:25.010 fused_ordering(739) 00:13:25.010 fused_ordering(740) 00:13:25.010 fused_ordering(741) 00:13:25.010 fused_ordering(742) 00:13:25.010 fused_ordering(743) 
00:13:25.010 fused_ordering(744) 00:13:25.010 fused_ordering(745) 00:13:25.010 fused_ordering(746) 00:13:25.010 fused_ordering(747) 00:13:25.010 fused_ordering(748) 00:13:25.010 fused_ordering(749) 00:13:25.010 fused_ordering(750) 00:13:25.010 fused_ordering(751) 00:13:25.010 fused_ordering(752) 00:13:25.010 fused_ordering(753) 00:13:25.010 fused_ordering(754) 00:13:25.010 fused_ordering(755) 00:13:25.010 fused_ordering(756) 00:13:25.010 fused_ordering(757) 00:13:25.010 fused_ordering(758) 00:13:25.010 fused_ordering(759) 00:13:25.010 fused_ordering(760) 00:13:25.010 fused_ordering(761) 00:13:25.010 fused_ordering(762) 00:13:25.010 fused_ordering(763) 00:13:25.010 fused_ordering(764) 00:13:25.010 fused_ordering(765) 00:13:25.010 fused_ordering(766) 00:13:25.010 fused_ordering(767) 00:13:25.010 fused_ordering(768) 00:13:25.010 fused_ordering(769) 00:13:25.010 fused_ordering(770) 00:13:25.010 fused_ordering(771) 00:13:25.010 fused_ordering(772) 00:13:25.010 fused_ordering(773) 00:13:25.010 fused_ordering(774) 00:13:25.010 fused_ordering(775) 00:13:25.010 fused_ordering(776) 00:13:25.010 fused_ordering(777) 00:13:25.010 fused_ordering(778) 00:13:25.010 fused_ordering(779) 00:13:25.010 fused_ordering(780) 00:13:25.010 fused_ordering(781) 00:13:25.010 fused_ordering(782) 00:13:25.010 fused_ordering(783) 00:13:25.010 fused_ordering(784) 00:13:25.010 fused_ordering(785) 00:13:25.010 fused_ordering(786) 00:13:25.010 fused_ordering(787) 00:13:25.010 fused_ordering(788) 00:13:25.010 fused_ordering(789) 00:13:25.010 fused_ordering(790) 00:13:25.010 fused_ordering(791) 00:13:25.010 fused_ordering(792) 00:13:25.010 fused_ordering(793) 00:13:25.010 fused_ordering(794) 00:13:25.010 fused_ordering(795) 00:13:25.010 fused_ordering(796) 00:13:25.010 fused_ordering(797) 00:13:25.010 fused_ordering(798) 00:13:25.010 fused_ordering(799) 00:13:25.010 fused_ordering(800) 00:13:25.010 fused_ordering(801) 00:13:25.010 fused_ordering(802) 00:13:25.010 fused_ordering(803) 00:13:25.010 fused_ordering(804) 00:13:25.010 fused_ordering(805) 00:13:25.010 fused_ordering(806) 00:13:25.010 fused_ordering(807) 00:13:25.010 fused_ordering(808) 00:13:25.010 fused_ordering(809) 00:13:25.010 fused_ordering(810) 00:13:25.010 fused_ordering(811) 00:13:25.010 fused_ordering(812) 00:13:25.010 fused_ordering(813) 00:13:25.010 fused_ordering(814) 00:13:25.010 fused_ordering(815) 00:13:25.010 fused_ordering(816) 00:13:25.010 fused_ordering(817) 00:13:25.010 fused_ordering(818) 00:13:25.010 fused_ordering(819) 00:13:25.010 fused_ordering(820) 00:13:25.010 fused_ordering(821) 00:13:25.010 fused_ordering(822) 00:13:25.010 fused_ordering(823) 00:13:25.010 fused_ordering(824) 00:13:25.010 fused_ordering(825) 00:13:25.010 fused_ordering(826) 00:13:25.010 fused_ordering(827) 00:13:25.010 fused_ordering(828) 00:13:25.010 fused_ordering(829) 00:13:25.010 fused_ordering(830) 00:13:25.010 fused_ordering(831) 00:13:25.010 fused_ordering(832) 00:13:25.010 fused_ordering(833) 00:13:25.010 fused_ordering(834) 00:13:25.010 fused_ordering(835) 00:13:25.010 fused_ordering(836) 00:13:25.010 fused_ordering(837) 00:13:25.010 fused_ordering(838) 00:13:25.010 fused_ordering(839) 00:13:25.010 fused_ordering(840) 00:13:25.010 fused_ordering(841) 00:13:25.010 fused_ordering(842) 00:13:25.010 fused_ordering(843) 00:13:25.010 fused_ordering(844) 00:13:25.010 fused_ordering(845) 00:13:25.010 fused_ordering(846) 00:13:25.010 fused_ordering(847) 00:13:25.010 fused_ordering(848) 00:13:25.010 fused_ordering(849) 00:13:25.010 fused_ordering(850) 00:13:25.010 
fused_ordering(851) 00:13:25.010 fused_ordering(852) 00:13:25.010 fused_ordering(853) 00:13:25.010 fused_ordering(854) 00:13:25.010 fused_ordering(855) 00:13:25.010 fused_ordering(856) 00:13:25.010 fused_ordering(857) 00:13:25.010 fused_ordering(858) 00:13:25.010 fused_ordering(859) 00:13:25.010 fused_ordering(860) 00:13:25.010 fused_ordering(861) 00:13:25.010 fused_ordering(862) 00:13:25.010 fused_ordering(863) 00:13:25.010 fused_ordering(864) 00:13:25.010 fused_ordering(865) 00:13:25.010 fused_ordering(866) 00:13:25.010 fused_ordering(867) 00:13:25.010 fused_ordering(868) 00:13:25.010 fused_ordering(869) 00:13:25.010 fused_ordering(870) 00:13:25.010 fused_ordering(871) 00:13:25.010 fused_ordering(872) 00:13:25.010 fused_ordering(873) 00:13:25.010 fused_ordering(874) 00:13:25.010 fused_ordering(875) 00:13:25.010 fused_ordering(876) 00:13:25.010 fused_ordering(877) 00:13:25.010 fused_ordering(878) 00:13:25.010 fused_ordering(879) 00:13:25.011 fused_ordering(880) 00:13:25.011 fused_ordering(881) 00:13:25.011 fused_ordering(882) 00:13:25.011 fused_ordering(883) 00:13:25.011 fused_ordering(884) 00:13:25.011 fused_ordering(885) 00:13:25.011 fused_ordering(886) 00:13:25.011 fused_ordering(887) 00:13:25.011 fused_ordering(888) 00:13:25.011 fused_ordering(889) 00:13:25.011 fused_ordering(890) 00:13:25.011 fused_ordering(891) 00:13:25.011 fused_ordering(892) 00:13:25.011 fused_ordering(893) 00:13:25.011 fused_ordering(894) 00:13:25.011 fused_ordering(895) 00:13:25.011 fused_ordering(896) 00:13:25.011 fused_ordering(897) 00:13:25.011 fused_ordering(898) 00:13:25.011 fused_ordering(899) 00:13:25.011 fused_ordering(900) 00:13:25.011 fused_ordering(901) 00:13:25.011 fused_ordering(902) 00:13:25.011 fused_ordering(903) 00:13:25.011 fused_ordering(904) 00:13:25.011 fused_ordering(905) 00:13:25.011 fused_ordering(906) 00:13:25.011 fused_ordering(907) 00:13:25.011 fused_ordering(908) 00:13:25.011 fused_ordering(909) 00:13:25.011 fused_ordering(910) 00:13:25.011 fused_ordering(911) 00:13:25.011 fused_ordering(912) 00:13:25.011 fused_ordering(913) 00:13:25.011 fused_ordering(914) 00:13:25.011 fused_ordering(915) 00:13:25.011 fused_ordering(916) 00:13:25.011 fused_ordering(917) 00:13:25.011 fused_ordering(918) 00:13:25.011 fused_ordering(919) 00:13:25.011 fused_ordering(920) 00:13:25.011 fused_ordering(921) 00:13:25.011 fused_ordering(922) 00:13:25.011 fused_ordering(923) 00:13:25.011 fused_ordering(924) 00:13:25.011 fused_ordering(925) 00:13:25.011 fused_ordering(926) 00:13:25.011 fused_ordering(927) 00:13:25.011 fused_ordering(928) 00:13:25.011 fused_ordering(929) 00:13:25.011 fused_ordering(930) 00:13:25.011 fused_ordering(931) 00:13:25.011 fused_ordering(932) 00:13:25.011 fused_ordering(933) 00:13:25.011 fused_ordering(934) 00:13:25.011 fused_ordering(935) 00:13:25.011 fused_ordering(936) 00:13:25.011 fused_ordering(937) 00:13:25.011 fused_ordering(938) 00:13:25.011 fused_ordering(939) 00:13:25.011 fused_ordering(940) 00:13:25.011 fused_ordering(941) 00:13:25.011 fused_ordering(942) 00:13:25.011 fused_ordering(943) 00:13:25.011 fused_ordering(944) 00:13:25.011 fused_ordering(945) 00:13:25.011 fused_ordering(946) 00:13:25.011 fused_ordering(947) 00:13:25.011 fused_ordering(948) 00:13:25.011 fused_ordering(949) 00:13:25.011 fused_ordering(950) 00:13:25.011 fused_ordering(951) 00:13:25.011 fused_ordering(952) 00:13:25.011 fused_ordering(953) 00:13:25.011 fused_ordering(954) 00:13:25.011 fused_ordering(955) 00:13:25.011 fused_ordering(956) 00:13:25.011 fused_ordering(957) 00:13:25.011 fused_ordering(958) 
00:13:25.011 fused_ordering(959) 00:13:25.011 fused_ordering(960) 00:13:25.011 fused_ordering(961) 00:13:25.011 fused_ordering(962) 00:13:25.011 fused_ordering(963) 00:13:25.011 fused_ordering(964) 00:13:25.011 fused_ordering(965) 00:13:25.011 fused_ordering(966) 00:13:25.011 fused_ordering(967) 00:13:25.011 fused_ordering(968) 00:13:25.011 fused_ordering(969) 00:13:25.011 fused_ordering(970) 00:13:25.011 fused_ordering(971) 00:13:25.011 fused_ordering(972) 00:13:25.011 fused_ordering(973) 00:13:25.011 fused_ordering(974) 00:13:25.011 fused_ordering(975) 00:13:25.011 fused_ordering(976) 00:13:25.011 fused_ordering(977) 00:13:25.011 fused_ordering(978) 00:13:25.011 fused_ordering(979) 00:13:25.011 fused_ordering(980) 00:13:25.011 fused_ordering(981) 00:13:25.011 fused_ordering(982) 00:13:25.011 fused_ordering(983) 00:13:25.011 fused_ordering(984) 00:13:25.011 fused_ordering(985) 00:13:25.011 fused_ordering(986) 00:13:25.011 fused_ordering(987) 00:13:25.011 fused_ordering(988) 00:13:25.011 fused_ordering(989) 00:13:25.011 fused_ordering(990) 00:13:25.011 fused_ordering(991) 00:13:25.011 fused_ordering(992) 00:13:25.011 fused_ordering(993) 00:13:25.011 fused_ordering(994) 00:13:25.011 fused_ordering(995) 00:13:25.011 fused_ordering(996) 00:13:25.011 fused_ordering(997) 00:13:25.011 fused_ordering(998) 00:13:25.011 fused_ordering(999) 00:13:25.011 fused_ordering(1000) 00:13:25.011 fused_ordering(1001) 00:13:25.011 fused_ordering(1002) 00:13:25.011 fused_ordering(1003) 00:13:25.011 fused_ordering(1004) 00:13:25.011 fused_ordering(1005) 00:13:25.011 fused_ordering(1006) 00:13:25.011 fused_ordering(1007) 00:13:25.011 fused_ordering(1008) 00:13:25.011 fused_ordering(1009) 00:13:25.011 fused_ordering(1010) 00:13:25.011 fused_ordering(1011) 00:13:25.011 fused_ordering(1012) 00:13:25.011 fused_ordering(1013) 00:13:25.011 fused_ordering(1014) 00:13:25.011 fused_ordering(1015) 00:13:25.011 fused_ordering(1016) 00:13:25.011 fused_ordering(1017) 00:13:25.011 fused_ordering(1018) 00:13:25.011 fused_ordering(1019) 00:13:25.011 fused_ordering(1020) 00:13:25.011 fused_ordering(1021) 00:13:25.011 fused_ordering(1022) 00:13:25.011 fused_ordering(1023) 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.011 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:25.011 rmmod nvme_rdma 00:13:25.011 rmmod nvme_fabrics 00:13:25.275 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:25.276 18:18:38 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3388754 ']' 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3388754 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3388754 ']' 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3388754 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3388754 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3388754' 00:13:25.276 killing process with pid 3388754 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3388754 00:13:25.276 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3388754 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:25.537 00:13:25.537 real 0m9.221s 00:13:25.537 user 0m4.876s 00:13:25.537 sys 0m5.777s 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.537 ************************************ 00:13:25.537 END TEST nvmf_fused_ordering 00:13:25.537 ************************************ 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.537 ************************************ 00:13:25.537 START TEST nvmf_ns_masking 00:13:25.537 ************************************ 00:13:25.537 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:25.537 * Looking for test storage... 
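Before the ns_masking run above was kicked off, the fused-ordering teardown (nvmftestfini) unloaded nvme-rdma and stopped the target with a guarded kill: it checks that the recorded pid still exists, confirms via ps that the process name is the expected SPDK reactor (reactor_1 here) rather than something else that may have reused the pid, and only then kills and waits. A condensed sketch of that pattern, with the pid from this run as a placeholder:

  pid=3388754                                  # nvmfpid recorded when the target started
  if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 for an SPDK app
    if [ "$name" != "sudo" ]; then             # don't signal an unrelated/recycled pid
      kill "$pid"
      wait "$pid" 2>/dev/null
    fi
  fi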
00:13:25.799 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.799 --rc genhtml_branch_coverage=1 00:13:25.799 --rc genhtml_function_coverage=1 00:13:25.799 --rc genhtml_legend=1 00:13:25.799 --rc geninfo_all_blocks=1 00:13:25.799 --rc geninfo_unexecuted_blocks=1 00:13:25.799 00:13:25.799 ' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.799 --rc genhtml_branch_coverage=1 00:13:25.799 --rc genhtml_function_coverage=1 00:13:25.799 --rc genhtml_legend=1 00:13:25.799 --rc geninfo_all_blocks=1 00:13:25.799 --rc geninfo_unexecuted_blocks=1 00:13:25.799 00:13:25.799 ' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.799 --rc genhtml_branch_coverage=1 00:13:25.799 --rc genhtml_function_coverage=1 00:13:25.799 --rc genhtml_legend=1 00:13:25.799 --rc geninfo_all_blocks=1 00:13:25.799 --rc geninfo_unexecuted_blocks=1 00:13:25.799 00:13:25.799 ' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.799 --rc genhtml_branch_coverage=1 00:13:25.799 --rc genhtml_function_coverage=1 00:13:25.799 --rc genhtml_legend=1 00:13:25.799 --rc geninfo_all_blocks=1 00:13:25.799 --rc geninfo_unexecuted_blocks=1 00:13:25.799 00:13:25.799 ' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.799 18:18:38 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.799 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.800 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:25.800 18:18:38 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=57f50ffa-c52c-4379-8365-cf0576d8040d 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c5c76a3f-03bb-456b-a296-8b70522d13f3 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e33ee780-8913-48d9-a2ef-2a34ca13c786 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.800 18:18:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.372 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.373 18:18:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:32.373 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:32.373 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:32.373 Found net devices under 0000:18:00.0: mlx_0_0 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:32.373 Found net devices under 0000:18:00.1: mlx_0_1 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # rdma_device_init 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:32.373 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:32.632 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:32.633 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:32.633 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:13:32.633 altname enp24s0f0np0 00:13:32.633 altname ens785f0np0 00:13:32.633 inet 192.168.100.8/24 scope global mlx_0_0 00:13:32.633 valid_lft forever preferred_lft forever 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:32.633 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:32.633 link/ether 
50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:13:32.633 altname enp24s0f1np1 00:13:32.633 altname ens785f1np1 00:13:32.633 inet 192.168.100.9/24 scope global mlx_0_1 00:13:32.633 valid_lft forever preferred_lft forever 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
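The address discovery above resolves 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1 by parsing ip -o -4 addr show. The same extraction, pulled out as a small helper (interface names are the ones this host reports; on other machines they will differ):

  # get_ipv4: print the first IPv4 address assigned to an interface,
  # using the ip/awk/cut pipeline seen in the trace.
  get_ipv4() {
      local ifname=$1
      ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1 | head -n 1
  }

  for ifname in mlx_0_0 mlx_0_1; do
      addr=$(get_ipv4 "$ifname")
      [ -n "$addr" ] && echo "$ifname -> $addr" || echo "$ifname has no IPv4 address"
  done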
00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:32.633 192.168.100.9' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:32.633 192.168.100.9' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # head -n 1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:32.633 192.168.100.9' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # tail -n +2 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # head -n 1 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3392044 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3392044 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3392044 ']' 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.633 18:18:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.633 18:18:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.633 [2024-10-08 18:18:45.802392] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:13:32.633 [2024-10-08 18:18:45.802455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.892 [2024-10-08 18:18:45.889769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.892 [2024-10-08 18:18:45.976219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.892 [2024-10-08 18:18:45.976258] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.892 [2024-10-08 18:18:45.976268] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.892 [2024-10-08 18:18:45.976277] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.892 [2024-10-08 18:18:45.976285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.892 [2024-10-08 18:18:45.976743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:33.831 [2024-10-08 18:18:46.901328] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe6a0e0/0xe6e5d0) succeed. 00:13:33.831 [2024-10-08 18:18:46.911951] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe6b5e0/0xeafc70) succeed. 
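With the RDMA transport created and both mlx5 IB devices registered, the trace goes on to provision the target: two 64 MiB malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1, a namespace, and an RDMA listener on 192.168.100.8:4420. Collected in one place, the RPC sequence being exercised looks roughly like this (commands and values are the ones in the trace; nvmf_tgt is assumed to be running already):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # Transport first, then backing bdevs, subsystem, namespace and listener.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420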
00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:33.831 18:18:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:34.090 Malloc1 00:13:34.090 18:18:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:34.349 Malloc2 00:13:34.349 18:18:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.608 18:18:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:34.867 18:18:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:34.867 [2024-10-08 18:18:47.980476] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:34.867 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:34.867 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e33ee780-8913-48d9-a2ef-2a34ca13c786 -a 192.168.100.8 -s 4420 -i 4 00:13:35.437 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.437 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:35.437 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.437 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:35.437 18:18:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.976 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.977 [ 0]:0x1 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4d10fa9afb264098a87c4897ba19de5e 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4d10fa9afb264098a87c4897ba19de5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.977 [ 0]:0x1 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4d10fa9afb264098a87c4897ba19de5e 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4d10fa9afb264098a87c4897ba19de5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.977 [ 1]:0x2 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:37.977 18:18:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:38.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.546 18:18:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.546 18:18:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:38.805 18:18:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:38.805 18:18:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e33ee780-8913-48d9-a2ef-2a34ca13c786 -a 192.168.100.8 -s 4420 -i 4 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:39.396 18:18:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:41.301 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.302 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.561 [ 0]:0x2 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.561 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.819 [ 0]:0x1 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.819 18:18:54 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4d10fa9afb264098a87c4897ba19de5e 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4d10fa9afb264098a87c4897ba19de5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.819 [ 1]:0x2 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.819 18:18:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.077 [ 0]:0x2 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:42.077 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.644 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.644 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:42.644 18:18:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e33ee780-8913-48d9-a2ef-2a34ca13c786 -a 192.168.100.8 -s 4420 -i 4 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:43.212 18:18:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.748 18:18:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.748 [ 0]:0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4d10fa9afb264098a87c4897ba19de5e 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4d10fa9afb264098a87c4897ba19de5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.748 [ 1]:0x2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:45.748 18:18:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.748 [ 0]:0x2 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.748 18:18:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.007 [2024-10-08 18:18:58.974425] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:46.007 request: 00:13:46.007 { 00:13:46.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.007 "nsid": 2, 00:13:46.007 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.007 "method": "nvmf_ns_remove_host", 00:13:46.007 "req_id": 1 00:13:46.007 } 00:13:46.007 Got JSON-RPC error response 00:13:46.007 response: 00:13:46.007 { 00:13:46.007 "code": -32602, 00:13:46.007 "message": "Invalid parameters" 00:13:46.007 } 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:46.007 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.008 18:18:59 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.008 [ 0]:0x2 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d06124293ef4bffb8da25d2c67e7734 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d06124293ef4bffb8da25d2c67e7734 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:46.008 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3394039 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3394039 /var/tmp/host.sock 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3394039 ']' 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:46.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
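The ns visibility probe traced repeatedly above (target/ns_masking.sh@43-45) reduces to three commands: list the namespaces the controller exposes, pull the NGUID out of id-ns JSON output, and treat an all-zero NGUID as "masked". The sketch below is a rough reconstruction from the traced commands only; the real helper in ns_masking.sh resolves the controller device dynamically, and /dev/nvme0 is simply the device this run happened to get.

    ns_is_visible() {
        local nsid=$1
        # A host that is allowed to see the namespace finds it in list-ns output
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # and gets a real NGUID back; a masked namespace reports all zeroes
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1   # succeeds while nqn.2016-06.io.spdk:host1 is attached to nsid 1, as in the trace above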
00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.576 18:18:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:46.576 [2024-10-08 18:18:59.562123] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:13:46.576 [2024-10-08 18:18:59.562188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394039 ] 00:13:46.576 [2024-10-08 18:18:59.644805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.576 [2024-10-08 18:18:59.724943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.513 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.513 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:47.513 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.513 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.772 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 57f50ffa-c52c-4379-8365-cf0576d8040d 00:13:47.772 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:47.772 18:19:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 57F50FFAC52C43798365CF0576D8040D -i 00:13:48.031 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c5c76a3f-03bb-456b-a296-8b70522d13f3 00:13:48.031 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:48.031 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C5C76A3F03BB456BA2968B70522D13F3 -i 00:13:48.290 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:48.290 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:48.549 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:48.549 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:13:48.808 nvme0n1 00:13:48.808 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:48.808 18:19:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:49.067 nvme1n2 00:13:49.067 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:49.067 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:49.067 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:49.067 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:49.067 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:49.327 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:49.327 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:49.327 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:49.327 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:49.586 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 57f50ffa-c52c-4379-8365-cf0576d8040d == \5\7\f\5\0\f\f\a\-\c\5\2\c\-\4\3\7\9\-\8\3\6\5\-\c\f\0\5\7\6\d\8\0\4\0\d ]] 00:13:49.586 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:49.586 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:49.586 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c5c76a3f-03bb-456b-a296-8b70522d13f3 == \c\5\c\7\6\a\3\f\-\0\3\b\b\-\4\5\6\b\-\a\2\9\6\-\8\b\7\0\5\2\2\d\1\3\f\3 ]] 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3394039 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3394039 ']' 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3394039 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394039 00:13:49.845 18:19:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394039' 00:13:49.845 killing process with pid 3394039 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3394039 00:13:49.845 18:19:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3394039 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:50.413 rmmod nvme_rdma 00:13:50.413 rmmod nvme_fabrics 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3392044 ']' 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3392044 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3392044 ']' 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3392044 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.413 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392044 00:13:50.672 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.672 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.672 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392044' 00:13:50.672 killing process with pid 3392044 00:13:50.672 
18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3392044 00:13:50.672 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3392044 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:50.931 00:13:50.931 real 0m25.290s 00:13:50.931 user 0m29.125s 00:13:50.931 sys 0m8.607s 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.931 ************************************ 00:13:50.931 END TEST nvmf_ns_masking 00:13:50.931 ************************************ 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.931 ************************************ 00:13:50.931 START TEST nvmf_nvme_cli 00:13:50.931 ************************************ 00:13:50.931 18:19:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:50.931 * Looking for test storage... 
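Before the next test begins, the RPC sequence that drove the masking checks above can be condensed as follows. Every call is lifted from the rpc.py invocations visible in the trace (paths shortened to the scripts/ directory of the SPDK tree, trailing options kept exactly as traced); this is an illustrative summary, not a substitute for ns_masking.sh:

    # Namespaces are re-added with explicit NGUIDs, derived from their UUIDs by stripping dashes
    # (uuid2nguid traces 'tr -d -' above; the uppercasing step is not visible in the trace).
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 57F50FFAC52C43798365CF0576D8040D -i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C5C76A3F03BB456BA2968B70522D13F3 -i
    # Per-host visibility is then toggled; removing the host is what makes ns_is_visible
    # report the all-zero NGUID seen earlier in the log.
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1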
00:13:50.931 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:50.931 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:50.931 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:13:50.931 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.191 --rc genhtml_branch_coverage=1 00:13:51.191 --rc genhtml_function_coverage=1 00:13:51.191 --rc genhtml_legend=1 00:13:51.191 --rc geninfo_all_blocks=1 00:13:51.191 --rc geninfo_unexecuted_blocks=1 00:13:51.191 00:13:51.191 ' 00:13:51.191 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.191 --rc genhtml_branch_coverage=1 00:13:51.191 --rc genhtml_function_coverage=1 00:13:51.191 --rc genhtml_legend=1 00:13:51.191 --rc geninfo_all_blocks=1 00:13:51.191 --rc geninfo_unexecuted_blocks=1 00:13:51.191 00:13:51.191 ' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.192 --rc genhtml_branch_coverage=1 00:13:51.192 --rc genhtml_function_coverage=1 00:13:51.192 --rc genhtml_legend=1 00:13:51.192 --rc geninfo_all_blocks=1 00:13:51.192 --rc geninfo_unexecuted_blocks=1 00:13:51.192 00:13:51.192 ' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.192 --rc genhtml_branch_coverage=1 00:13:51.192 --rc genhtml_function_coverage=1 00:13:51.192 --rc genhtml_legend=1 00:13:51.192 --rc geninfo_all_blocks=1 00:13:51.192 --rc geninfo_unexecuted_blocks=1 00:13:51.192 00:13:51.192 ' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.192 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.192 18:19:04 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.192 18:19:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:57.776 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:57.776 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:57.776 Found net devices under 0000:18:00.0: mlx_0_0 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:57.776 Found net devices under 0000:18:00.1: mlx_0_1 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # rdma_device_init 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:57.776 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:57.777 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:58.036 18:19:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:58.036 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:58.036 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:13:58.036 altname enp24s0f0np0 00:13:58.036 altname ens785f0np0 00:13:58.036 inet 192.168.100.8/24 scope global mlx_0_0 00:13:58.036 valid_lft forever preferred_lft forever 00:13:58.036 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:58.036 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:58.036 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:58.036 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:58.037 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:58.037 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:13:58.037 altname enp24s0f1np1 00:13:58.037 altname ens785f1np1 00:13:58.037 inet 192.168.100.9/24 scope global mlx_0_1 00:13:58.037 valid_lft forever preferred_lft forever 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:58.037 18:19:11 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:58.037 192.168.100.9' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:58.037 192.168.100.9' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # head -n 1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:58.037 192.168.100.9' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # tail -n +2 00:13:58.037 18:19:11 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # head -n 1 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3397536 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3397536 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3397536 ']' 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.037 18:19:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.037 [2024-10-08 18:19:11.196269] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:13:58.037 [2024-10-08 18:19:11.196334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.296 [2024-10-08 18:19:11.281066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.296 [2024-10-08 18:19:11.370593] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.296 [2024-10-08 18:19:11.370633] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:58.296 [2024-10-08 18:19:11.370643] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.296 [2024-10-08 18:19:11.370652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.296 [2024-10-08 18:19:11.370660] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.296 [2024-10-08 18:19:11.372059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.296 [2024-10-08 18:19:11.372165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.296 [2024-10-08 18:19:11.372264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.296 [2024-10-08 18:19:11.372266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 [2024-10-08 18:19:12.131795] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xce72e0/0xceb7d0) succeed. 00:13:59.236 [2024-10-08 18:19:12.142129] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xce8920/0xd2ce70) succeed. 
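For reference, the address discovery traced above (the get_rdma_if_list / get_ip_address steps in nvmf/common.sh) reduces to roughly the following shell pipeline. This is a condensed sketch reconstructed from the traced commands, not the verbatim script; the interface names mlx_0_0/mlx_0_1 and the 192.168.100.8/9 addresses are specific to this test bed:

    get_ip_address() {
        local interface=$1
        # Print only the IPv4 address of the interface, e.g. 192.168.100.8
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=""
    for nic_name in mlx_0_0 mlx_0_1; do                     # names taken from the trace above
        rdma_ips+="$(get_ip_address "$nic_name")"$'\n'
    done

    # The first and second target IPs are simply the first and second lines of the
    # collected list, matching the head -n 1 / tail -n +2 calls seen in the trace.
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # 192.168.100.9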
00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 Malloc0 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 Malloc1 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.236 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.237 [2024-10-08 18:19:12.353284] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:59.237 18:19:12 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.237 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:13:59.917 00:13:59.917 Discovery Log Number of Records 2, Generation counter 2 00:13:59.917 =====Discovery Log Entry 0====== 00:13:59.917 trtype: rdma 00:13:59.917 adrfam: ipv4 00:13:59.917 subtype: current discovery subsystem 00:13:59.917 treq: not required 00:13:59.917 portid: 0 00:13:59.917 trsvcid: 4420 00:13:59.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:59.917 traddr: 192.168.100.8 00:13:59.917 eflags: explicit discovery connections, duplicate discovery information 00:13:59.917 rdma_prtype: not specified 00:13:59.917 rdma_qptype: connected 00:13:59.917 rdma_cms: rdma-cm 00:13:59.917 rdma_pkey: 0x0000 00:13:59.917 =====Discovery Log Entry 1====== 00:13:59.917 trtype: rdma 00:13:59.917 adrfam: ipv4 00:13:59.917 subtype: nvme subsystem 00:13:59.917 treq: not required 00:13:59.917 portid: 0 00:13:59.918 trsvcid: 4420 00:13:59.918 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:59.918 traddr: 192.168.100.8 00:13:59.918 eflags: none 00:13:59.918 rdma_prtype: not specified 00:13:59.918 rdma_qptype: connected 00:13:59.918 rdma_cms: rdma-cm 00:13:59.918 rdma_pkey: 0x0000 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:59.918 18:19:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:00.855 18:19:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:02.797 /dev/nvme0n2 ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:02.797 18:19:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.736 
18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:03.736 rmmod nvme_rdma 00:14:03.736 rmmod nvme_fabrics 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3397536 ']' 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3397536 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3397536 ']' 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3397536 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.736 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397536 00:14:03.995 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.995 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.995 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397536' 00:14:03.995 killing process with pid 3397536 00:14:03.995 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3397536 00:14:03.995 18:19:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3397536 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:14:04.255 00:14:04.255 real 0m13.268s 00:14:04.255 user 0m25.443s 00:14:04.255 sys 0m6.135s 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:04.255 ************************************ 00:14:04.255 END TEST nvmf_nvme_cli 00:14:04.255 ************************************ 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.255 ************************************ 00:14:04.255 START TEST nvmf_auth_target 00:14:04.255 ************************************ 00:14:04.255 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:04.514 * Looking for test storage... 00:14:04.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.515 --rc genhtml_branch_coverage=1 00:14:04.515 --rc genhtml_function_coverage=1 00:14:04.515 --rc genhtml_legend=1 00:14:04.515 --rc geninfo_all_blocks=1 00:14:04.515 --rc geninfo_unexecuted_blocks=1 00:14:04.515 00:14:04.515 ' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.515 --rc genhtml_branch_coverage=1 00:14:04.515 --rc genhtml_function_coverage=1 00:14:04.515 --rc genhtml_legend=1 00:14:04.515 --rc geninfo_all_blocks=1 00:14:04.515 --rc geninfo_unexecuted_blocks=1 00:14:04.515 00:14:04.515 ' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.515 --rc genhtml_branch_coverage=1 00:14:04.515 --rc genhtml_function_coverage=1 00:14:04.515 --rc genhtml_legend=1 00:14:04.515 --rc geninfo_all_blocks=1 00:14:04.515 --rc geninfo_unexecuted_blocks=1 00:14:04.515 00:14:04.515 ' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.515 --rc genhtml_branch_coverage=1 00:14:04.515 --rc genhtml_function_coverage=1 00:14:04.515 --rc genhtml_legend=1 00:14:04.515 --rc geninfo_all_blocks=1 00:14:04.515 --rc geninfo_unexecuted_blocks=1 00:14:04.515 00:14:04.515 ' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.515 18:19:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.515 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:04.515 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.516 18:19:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.097 18:19:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:11.097 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:11.097 18:19:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:11.097 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:11.097 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:11.098 Found net devices under 0000:18:00.0: mlx_0_0 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:11.098 Found net devices under 0000:18:00.1: mlx_0_1 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.098 18:19:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # rdma_device_init 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:11.098 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:11.358 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:11.359 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:11.359 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:14:11.359 altname enp24s0f0np0 00:14:11.359 altname ens785f0np0 00:14:11.359 inet 192.168.100.8/24 scope global mlx_0_0 00:14:11.359 valid_lft forever preferred_lft forever 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:11.359 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:11.359 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:14:11.359 altname enp24s0f1np1 00:14:11.359 altname ens785f1np1 00:14:11.359 inet 192.168.100.9/24 scope global mlx_0_1 00:14:11.359 valid_lft forever preferred_lft forever 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 
00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:11.359 18:19:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:14:11.359 192.168.100.9' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:14:11.359 192.168.100.9' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # head -n 1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:14:11.359 192.168.100.9' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # tail -n +2 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # head -n 1 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3401263 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3401263 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3401263 ']' 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
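The two target addresses are simply the first and second lines of the newline-separated RDMA_IP_LIST built above, and they feed the nvmf target that is then started with nvmf_auth debug logging; a minimal sketch of that sequence (binary and script paths as used in this workspace; the rpc_get_methods poll is an assumption standing in for waitforlisten) is:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    # start the NVMe-oF target with the nvmf_auth log flag and wait for its RPC socket
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done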
00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.359 18:19:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3401422 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b80d44d4896cc4d85d53bfae90ec50db33c6ae9c96fca558 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.BHX 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b80d44d4896cc4d85d53bfae90ec50db33c6ae9c96fca558 0 00:14:12.297 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b80d44d4896cc4d85d53bfae90ec50db33c6ae9c96fca558 0 00:14:12.298 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.298 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.298 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b80d44d4896cc4d85d53bfae90ec50db33c6ae9c96fca558 00:14:12.298 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:14:12.298 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@731 -- # python - 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.BHX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.BHX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.BHX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=94bb03d206a03cfd94f42185e4ef8266c5ee555cfb19e52f9e0373488f0e8043 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.aKz 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 94bb03d206a03cfd94f42185e4ef8266c5ee555cfb19e52f9e0373488f0e8043 3 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 94bb03d206a03cfd94f42185e4ef8266c5ee555cfb19e52f9e0373488f0e8043 3 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=94bb03d206a03cfd94f42185e4ef8266c5ee555cfb19e52f9e0373488f0e8043 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.aKz 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.aKz 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.aKz 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.557 18:19:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f0c70dddb5f8e929bf5e84064823d73f 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.biS 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f0c70dddb5f8e929bf5e84064823d73f 1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f0c70dddb5f8e929bf5e84064823d73f 1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f0c70dddb5f8e929bf5e84064823d73f 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.biS 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.biS 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.biS 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=77291767d821493d6e383859b51e188adb8355c4e5767c30 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.CAA 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 77291767d821493d6e383859b51e188adb8355c4e5767c30 2 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 77291767d821493d6e383859b51e188adb8355c4e5767c30 2 00:14:12.557 18:19:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=77291767d821493d6e383859b51e188adb8355c4e5767c30 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.CAA 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.CAA 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.CAA 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1561fa356d37bfb437c75dfbad8d38717c456d70f7c87cdd 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.XGg 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 1561fa356d37bfb437c75dfbad8d38717c456d70f7c87cdd 2 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1561fa356d37bfb437c75dfbad8d38717c456d70f7c87cdd 2 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.557 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.558 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1561fa356d37bfb437c75dfbad8d38717c456d70f7c87cdd 00:14:12.558 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:12.558 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.XGg 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.XGg 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.XGg 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2be68567c9ece2197cb05eab4d8e586c 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:12.816 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Vib 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2be68567c9ece2197cb05eab4d8e586c 1 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2be68567c9ece2197cb05eab4d8e586c 1 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2be68567c9ece2197cb05eab4d8e586c 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Vib 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Vib 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Vib 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bc8f976513e31aeecff0bb77a029ff149da52e56615b009a4ba5bd72166b214e 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:12.817 18:19:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Ro6 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bc8f976513e31aeecff0bb77a029ff149da52e56615b009a4ba5bd72166b214e 3 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bc8f976513e31aeecff0bb77a029ff149da52e56615b009a4ba5bd72166b214e 3 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bc8f976513e31aeecff0bb77a029ff149da52e56615b009a4ba5bd72166b214e 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Ro6 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Ro6 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Ro6 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3401263 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3401263 ']' 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.817 18:19:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3401422 /var/tmp/host.sock 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3401422 ']' 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:14:13.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.076 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BHX 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BHX 00:14:13.336 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BHX 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.aKz ]] 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aKz 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aKz 00:14:13.595 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aKz 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.biS 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.855 18:19:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.biS 00:14:13.855 18:19:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.biS 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.CAA ]] 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CAA 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CAA 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CAA 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XGg 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.XGg 00:14:14.114 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.XGg 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Vib ]] 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vib 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vib 00:14:14.372 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vib 00:14:14.631 18:19:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ro6 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ro6 00:14:14.631 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ro6 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.890 18:19:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.149 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.407 00:14:15.407 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.407 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.407 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.666 { 00:14:15.666 "cntlid": 1, 00:14:15.666 "qid": 0, 00:14:15.666 "state": "enabled", 00:14:15.666 "thread": "nvmf_tgt_poll_group_000", 00:14:15.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:15.666 "listen_address": { 00:14:15.666 "trtype": "RDMA", 00:14:15.666 "adrfam": "IPv4", 00:14:15.666 "traddr": "192.168.100.8", 00:14:15.666 "trsvcid": "4420" 00:14:15.666 }, 00:14:15.666 "peer_address": { 00:14:15.666 "trtype": "RDMA", 00:14:15.666 "adrfam": "IPv4", 00:14:15.666 "traddr": "192.168.100.8", 00:14:15.666 "trsvcid": "40640" 00:14:15.666 }, 00:14:15.666 "auth": { 00:14:15.666 "state": "completed", 00:14:15.666 "digest": "sha256", 00:14:15.666 "dhgroup": "null" 00:14:15.666 } 00:14:15.666 } 00:14:15.666 ]' 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.666 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:15.926 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:15.926 18:19:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:16.496 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.755 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.015 18:19:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.015 18:19:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.274 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.274 { 00:14:17.274 "cntlid": 3, 00:14:17.274 "qid": 0, 00:14:17.274 "state": "enabled", 00:14:17.274 "thread": "nvmf_tgt_poll_group_000", 00:14:17.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:17.274 "listen_address": { 00:14:17.274 "trtype": "RDMA", 00:14:17.274 "adrfam": "IPv4", 00:14:17.274 "traddr": "192.168.100.8", 00:14:17.274 "trsvcid": "4420" 00:14:17.274 }, 00:14:17.274 "peer_address": { 00:14:17.274 "trtype": "RDMA", 00:14:17.274 "adrfam": "IPv4", 00:14:17.274 "traddr": "192.168.100.8", 00:14:17.274 "trsvcid": "55938" 00:14:17.274 }, 00:14:17.274 "auth": { 00:14:17.274 "state": "completed", 00:14:17.274 "digest": "sha256", 00:14:17.274 "dhgroup": "null" 00:14:17.274 } 00:14:17.274 } 00:14:17.274 ]' 00:14:17.274 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.533 18:19:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.533 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.792 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:17.792 18:19:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:18.360 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.619 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.878 18:19:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.878 18:19:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.137 00:14:19.137 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.137 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.138 { 00:14:19.138 "cntlid": 5, 00:14:19.138 "qid": 0, 00:14:19.138 "state": "enabled", 00:14:19.138 "thread": "nvmf_tgt_poll_group_000", 00:14:19.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:19.138 "listen_address": { 00:14:19.138 "trtype": "RDMA", 00:14:19.138 "adrfam": "IPv4", 00:14:19.138 "traddr": "192.168.100.8", 00:14:19.138 "trsvcid": "4420" 00:14:19.138 }, 00:14:19.138 "peer_address": { 00:14:19.138 "trtype": "RDMA", 00:14:19.138 "adrfam": "IPv4", 00:14:19.138 "traddr": "192.168.100.8", 00:14:19.138 "trsvcid": "37843" 00:14:19.138 }, 00:14:19.138 "auth": { 00:14:19.138 "state": "completed", 00:14:19.138 "digest": "sha256", 00:14:19.138 "dhgroup": "null" 00:14:19.138 } 00:14:19.138 } 00:14:19.138 ]' 00:14:19.138 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:19.397 18:19:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.397 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.656 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:19.656 18:19:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:20.222 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.481 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.740 00:14:20.740 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.740 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.740 18:19:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.999 { 00:14:20.999 "cntlid": 7, 00:14:20.999 "qid": 0, 00:14:20.999 "state": "enabled", 00:14:20.999 "thread": "nvmf_tgt_poll_group_000", 00:14:20.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:20.999 "listen_address": { 00:14:20.999 "trtype": "RDMA", 00:14:20.999 "adrfam": "IPv4", 00:14:20.999 "traddr": "192.168.100.8", 00:14:20.999 "trsvcid": "4420" 00:14:20.999 }, 00:14:20.999 "peer_address": { 00:14:20.999 "trtype": "RDMA", 00:14:20.999 "adrfam": "IPv4", 00:14:20.999 "traddr": "192.168.100.8", 00:14:20.999 "trsvcid": "34206" 00:14:20.999 }, 00:14:20.999 "auth": { 00:14:20.999 "state": "completed", 00:14:20.999 "digest": "sha256", 00:14:20.999 "dhgroup": "null" 00:14:20.999 } 00:14:20.999 } 00:14:20.999 ]' 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.999 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
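(Condensed from the trace above: one connect_authenticate pass as the harness drives it through scripts/rpc.py. This is a minimal sketch only — it assumes the DH-HMAC-CHAP key objects key3/ckey3 were registered earlier in the run, that the target is already listening on 192.168.100.8:4420 over RDMA, and that the host-side bdev RPC server sits on /var/tmp/host.sock as in the trace; the rpc.py path is shortened.)
# host side: restrict the digests/dhgroups the initiator may negotiate
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target side: allow this host NQN to authenticate with key3
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3
# host side: attach a controller with the matching key, which triggers DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# sanity check: the controller name reported back should be nvme0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'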
00:14:21.258 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:21.258 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.258 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.258 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.258 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.517 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:21.517 18:19:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.084 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.343 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:22.343 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.343 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.343 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.343 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.344 18:19:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.344 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.601 00:14:22.601 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.601 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.601 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.860 { 00:14:22.860 "cntlid": 9, 00:14:22.860 "qid": 0, 00:14:22.860 "state": "enabled", 00:14:22.860 "thread": "nvmf_tgt_poll_group_000", 00:14:22.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:22.860 "listen_address": { 00:14:22.860 "trtype": "RDMA", 00:14:22.860 "adrfam": "IPv4", 00:14:22.860 "traddr": "192.168.100.8", 00:14:22.860 "trsvcid": "4420" 00:14:22.860 }, 00:14:22.860 "peer_address": { 00:14:22.860 "trtype": "RDMA", 00:14:22.860 "adrfam": "IPv4", 00:14:22.860 "traddr": "192.168.100.8", 00:14:22.860 "trsvcid": "33076" 00:14:22.860 }, 00:14:22.860 "auth": { 00:14:22.860 "state": "completed", 00:14:22.860 "digest": "sha256", 00:14:22.860 "dhgroup": "ffdhe2048" 00:14:22.860 } 00:14:22.860 } 00:14:22.860 ]' 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
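(The jq filters interleaved above are the verification step: the target's view of the new queue pair must report the negotiated digest, DH group, and a completed auth state. A minimal sketch of that check, assuming the same subsystem NQN and the default target RPC socket:)
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == "sha256" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == "ffdhe2048" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == "completed" ]]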
00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.860 18:19:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.860 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.860 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.120 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.120 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.120 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.120 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:23.120 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:24.057 18:19:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.057 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.317 18:19:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.317 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.576 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.576 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.836 { 00:14:24.836 "cntlid": 11, 00:14:24.836 "qid": 0, 00:14:24.836 "state": "enabled", 00:14:24.836 "thread": "nvmf_tgt_poll_group_000", 00:14:24.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:24.836 "listen_address": { 00:14:24.836 "trtype": "RDMA", 00:14:24.836 "adrfam": "IPv4", 00:14:24.836 "traddr": "192.168.100.8", 00:14:24.836 "trsvcid": "4420" 00:14:24.836 }, 00:14:24.836 "peer_address": { 00:14:24.836 "trtype": "RDMA", 00:14:24.836 "adrfam": "IPv4", 00:14:24.836 "traddr": 
"192.168.100.8", 00:14:24.836 "trsvcid": "45981" 00:14:24.836 }, 00:14:24.836 "auth": { 00:14:24.836 "state": "completed", 00:14:24.836 "digest": "sha256", 00:14:24.836 "dhgroup": "ffdhe2048" 00:14:24.836 } 00:14:24.836 } 00:14:24.836 ]' 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.836 18:19:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.095 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:25.095 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:25.664 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.924 18:19:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.924 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.183 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.183 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.183 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.183 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.183 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.442 { 00:14:26.442 "cntlid": 13, 00:14:26.442 "qid": 0, 00:14:26.442 "state": "enabled", 00:14:26.442 "thread": "nvmf_tgt_poll_group_000", 00:14:26.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:26.442 "listen_address": { 00:14:26.442 
"trtype": "RDMA", 00:14:26.442 "adrfam": "IPv4", 00:14:26.442 "traddr": "192.168.100.8", 00:14:26.442 "trsvcid": "4420" 00:14:26.442 }, 00:14:26.442 "peer_address": { 00:14:26.442 "trtype": "RDMA", 00:14:26.442 "adrfam": "IPv4", 00:14:26.442 "traddr": "192.168.100.8", 00:14:26.442 "trsvcid": "59865" 00:14:26.442 }, 00:14:26.442 "auth": { 00:14:26.442 "state": "completed", 00:14:26.442 "digest": "sha256", 00:14:26.442 "dhgroup": "ffdhe2048" 00:14:26.442 } 00:14:26.442 } 00:14:26.442 ]' 00:14:26.442 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.702 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.960 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:26.960 18:19:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:27.527 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.787 18:19:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.046 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.306 { 00:14:28.306 "cntlid": 15, 00:14:28.306 "qid": 0, 00:14:28.306 "state": "enabled", 
00:14:28.306 "thread": "nvmf_tgt_poll_group_000", 00:14:28.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:28.306 "listen_address": { 00:14:28.306 "trtype": "RDMA", 00:14:28.306 "adrfam": "IPv4", 00:14:28.306 "traddr": "192.168.100.8", 00:14:28.306 "trsvcid": "4420" 00:14:28.306 }, 00:14:28.306 "peer_address": { 00:14:28.306 "trtype": "RDMA", 00:14:28.306 "adrfam": "IPv4", 00:14:28.306 "traddr": "192.168.100.8", 00:14:28.306 "trsvcid": "42250" 00:14:28.306 }, 00:14:28.306 "auth": { 00:14:28.306 "state": "completed", 00:14:28.306 "digest": "sha256", 00:14:28.306 "dhgroup": "ffdhe2048" 00:14:28.306 } 00:14:28.306 } 00:14:28.306 ]' 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.306 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.565 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.565 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.565 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.565 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.565 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.825 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:28.825 18:19:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.393 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.652 18:19:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.911 00:14:29.911 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.911 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.911 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.170 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.170 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.170 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.171 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.171 18:19:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.171 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.171 { 00:14:30.171 "cntlid": 17, 00:14:30.171 "qid": 0, 00:14:30.171 "state": "enabled", 00:14:30.171 "thread": "nvmf_tgt_poll_group_000", 00:14:30.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:30.171 "listen_address": { 00:14:30.171 "trtype": "RDMA", 00:14:30.171 "adrfam": "IPv4", 00:14:30.171 "traddr": "192.168.100.8", 00:14:30.171 "trsvcid": "4420" 00:14:30.171 }, 00:14:30.171 "peer_address": { 00:14:30.171 "trtype": "RDMA", 00:14:30.171 "adrfam": "IPv4", 00:14:30.171 "traddr": "192.168.100.8", 00:14:30.171 "trsvcid": "49469" 00:14:30.171 }, 00:14:30.171 "auth": { 00:14:30.171 "state": "completed", 00:14:30.171 "digest": "sha256", 00:14:30.171 "dhgroup": "ffdhe3072" 00:14:30.171 } 00:14:30.171 } 00:14:30.171 ]' 00:14:30.171 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.171 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.171 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:30.430 18:19:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.365 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.639 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.970 00:14:31.970 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.970 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.970 18:19:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.970 18:19:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.970 { 00:14:31.970 "cntlid": 19, 00:14:31.970 "qid": 0, 00:14:31.970 "state": "enabled", 00:14:31.970 "thread": "nvmf_tgt_poll_group_000", 00:14:31.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:31.970 "listen_address": { 00:14:31.970 "trtype": "RDMA", 00:14:31.970 "adrfam": "IPv4", 00:14:31.970 "traddr": "192.168.100.8", 00:14:31.970 "trsvcid": "4420" 00:14:31.970 }, 00:14:31.970 "peer_address": { 00:14:31.970 "trtype": "RDMA", 00:14:31.970 "adrfam": "IPv4", 00:14:31.970 "traddr": "192.168.100.8", 00:14:31.970 "trsvcid": "54442" 00:14:31.970 }, 00:14:31.970 "auth": { 00:14:31.970 "state": "completed", 00:14:31.970 "digest": "sha256", 00:14:31.970 "dhgroup": "ffdhe3072" 00:14:31.970 } 00:14:31.970 } 00:14:31.970 ]' 00:14:31.970 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.230 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.489 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:32.489 18:19:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.056 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.315 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.575 00:14:33.575 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.575 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
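(Each pass ends with the reverse teardown before the next dhgroup/key combination is tried: the bdev controller is detached, the kernel session is disconnected, and the host entry is removed from the subsystem. A minimal sketch, reusing the same names as in the trace:)
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562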
00:14:33.575 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.834 { 00:14:33.834 "cntlid": 21, 00:14:33.834 "qid": 0, 00:14:33.834 "state": "enabled", 00:14:33.834 "thread": "nvmf_tgt_poll_group_000", 00:14:33.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:33.834 "listen_address": { 00:14:33.834 "trtype": "RDMA", 00:14:33.834 "adrfam": "IPv4", 00:14:33.834 "traddr": "192.168.100.8", 00:14:33.834 "trsvcid": "4420" 00:14:33.834 }, 00:14:33.834 "peer_address": { 00:14:33.834 "trtype": "RDMA", 00:14:33.834 "adrfam": "IPv4", 00:14:33.834 "traddr": "192.168.100.8", 00:14:33.834 "trsvcid": "51207" 00:14:33.834 }, 00:14:33.834 "auth": { 00:14:33.834 "state": "completed", 00:14:33.834 "digest": "sha256", 00:14:33.834 "dhgroup": "ffdhe3072" 00:14:33.834 } 00:14:33.834 } 00:14:33.834 ]' 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.834 18:19:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.093 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.093 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.093 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.093 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:34.093 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:35.031 18:19:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.031 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.291 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.550 00:14:35.550 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.550 18:19:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.550 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.808 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.808 { 00:14:35.808 "cntlid": 23, 00:14:35.808 "qid": 0, 00:14:35.808 "state": "enabled", 00:14:35.808 "thread": "nvmf_tgt_poll_group_000", 00:14:35.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:35.808 "listen_address": { 00:14:35.808 "trtype": "RDMA", 00:14:35.808 "adrfam": "IPv4", 00:14:35.808 "traddr": "192.168.100.8", 00:14:35.809 "trsvcid": "4420" 00:14:35.809 }, 00:14:35.809 "peer_address": { 00:14:35.809 "trtype": "RDMA", 00:14:35.809 "adrfam": "IPv4", 00:14:35.809 "traddr": "192.168.100.8", 00:14:35.809 "trsvcid": "33874" 00:14:35.809 }, 00:14:35.809 "auth": { 00:14:35.809 "state": "completed", 00:14:35.809 "digest": "sha256", 00:14:35.809 "dhgroup": "ffdhe3072" 00:14:35.809 } 00:14:35.809 } 00:14:35.809 ]' 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.809 18:19:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.067 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:36.067 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:36.635 18:19:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.894 18:19:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.153 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.412 00:14:37.412 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.412 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.412 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.671 { 00:14:37.671 "cntlid": 25, 00:14:37.671 "qid": 0, 00:14:37.671 "state": "enabled", 00:14:37.671 "thread": "nvmf_tgt_poll_group_000", 00:14:37.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:37.671 "listen_address": { 00:14:37.671 "trtype": "RDMA", 00:14:37.671 "adrfam": "IPv4", 00:14:37.671 "traddr": "192.168.100.8", 00:14:37.671 "trsvcid": "4420" 00:14:37.671 }, 00:14:37.671 "peer_address": { 00:14:37.671 "trtype": "RDMA", 00:14:37.671 "adrfam": "IPv4", 00:14:37.671 "traddr": "192.168.100.8", 00:14:37.671 "trsvcid": "45700" 00:14:37.671 }, 00:14:37.671 "auth": { 00:14:37.671 "state": "completed", 00:14:37.671 "digest": "sha256", 00:14:37.671 "dhgroup": "ffdhe4096" 00:14:37.671 } 00:14:37.671 } 00:14:37.671 ]' 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.671 18:19:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.930 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:37.930 18:19:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.866 18:19:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.125 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.384 00:14:39.384 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.384 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.384 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.643 { 00:14:39.643 "cntlid": 27, 00:14:39.643 "qid": 0, 00:14:39.643 "state": "enabled", 00:14:39.643 "thread": "nvmf_tgt_poll_group_000", 00:14:39.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:39.643 "listen_address": { 00:14:39.643 "trtype": "RDMA", 00:14:39.643 "adrfam": "IPv4", 00:14:39.643 "traddr": "192.168.100.8", 00:14:39.643 "trsvcid": "4420" 00:14:39.643 }, 00:14:39.643 "peer_address": { 00:14:39.643 "trtype": "RDMA", 00:14:39.643 "adrfam": "IPv4", 00:14:39.643 "traddr": "192.168.100.8", 00:14:39.643 "trsvcid": "37111" 00:14:39.643 }, 00:14:39.643 "auth": { 00:14:39.643 "state": "completed", 00:14:39.643 "digest": "sha256", 00:14:39.643 "dhgroup": "ffdhe4096" 00:14:39.643 } 00:14:39.643 } 00:14:39.643 ]' 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.643 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.902 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:39.903 18:19:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.840 18:19:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.099 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.358 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.358 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.617 { 00:14:41.617 "cntlid": 29, 00:14:41.617 "qid": 0, 00:14:41.617 "state": "enabled", 00:14:41.617 "thread": "nvmf_tgt_poll_group_000", 00:14:41.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:41.617 "listen_address": { 00:14:41.617 "trtype": "RDMA", 00:14:41.617 "adrfam": "IPv4", 00:14:41.617 "traddr": "192.168.100.8", 00:14:41.617 "trsvcid": "4420" 00:14:41.617 }, 00:14:41.617 "peer_address": { 00:14:41.617 "trtype": "RDMA", 00:14:41.617 "adrfam": "IPv4", 00:14:41.617 "traddr": "192.168.100.8", 00:14:41.617 "trsvcid": "57230" 00:14:41.617 }, 00:14:41.617 "auth": { 00:14:41.617 "state": "completed", 00:14:41.617 "digest": "sha256", 00:14:41.617 "dhgroup": "ffdhe4096" 00:14:41.617 } 00:14:41.617 } 00:14:41.617 ]' 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.617 18:19:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.617 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.876 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:41.876 18:19:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:42.445 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.704 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.963 18:19:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.963 18:19:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.223 00:14:43.223 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.223 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.223 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.482 { 00:14:43.482 "cntlid": 31, 00:14:43.482 "qid": 0, 00:14:43.482 "state": "enabled", 00:14:43.482 "thread": "nvmf_tgt_poll_group_000", 00:14:43.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:43.482 "listen_address": { 00:14:43.482 "trtype": "RDMA", 00:14:43.482 "adrfam": "IPv4", 00:14:43.482 "traddr": "192.168.100.8", 00:14:43.482 "trsvcid": "4420" 00:14:43.482 }, 00:14:43.482 "peer_address": { 00:14:43.482 "trtype": "RDMA", 00:14:43.482 "adrfam": "IPv4", 00:14:43.482 "traddr": "192.168.100.8", 00:14:43.482 "trsvcid": "47882" 00:14:43.482 }, 00:14:43.482 "auth": { 00:14:43.482 "state": "completed", 00:14:43.482 "digest": "sha256", 00:14:43.482 "dhgroup": "ffdhe4096" 00:14:43.482 } 00:14:43.482 } 00:14:43.482 ]' 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.482 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.483 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:43.483 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:14:43.483 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.483 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.483 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.742 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:43.742 18:19:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.679 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.680 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.680 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.939 18:19:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.199 00:14:45.199 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.199 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.199 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.458 { 00:14:45.458 "cntlid": 33, 00:14:45.458 "qid": 0, 00:14:45.458 "state": "enabled", 00:14:45.458 "thread": "nvmf_tgt_poll_group_000", 00:14:45.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:45.458 "listen_address": { 00:14:45.458 "trtype": "RDMA", 00:14:45.458 "adrfam": "IPv4", 00:14:45.458 "traddr": "192.168.100.8", 00:14:45.458 "trsvcid": "4420" 00:14:45.458 }, 00:14:45.458 "peer_address": { 00:14:45.458 "trtype": "RDMA", 00:14:45.458 "adrfam": "IPv4", 00:14:45.458 "traddr": "192.168.100.8", 00:14:45.458 "trsvcid": "38372" 00:14:45.458 }, 00:14:45.458 "auth": { 00:14:45.458 "state": "completed", 00:14:45.458 "digest": "sha256", 00:14:45.458 "dhgroup": "ffdhe6144" 00:14:45.458 } 00:14:45.458 } 00:14:45.458 ]' 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.458 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.717 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:45.717 18:19:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.655 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.914 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:46.914 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.914 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.914 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.914 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.914 18:19:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.915 18:19:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.174 00:14:47.174 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.174 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.174 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.433 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.433 { 00:14:47.433 "cntlid": 35, 00:14:47.433 "qid": 0, 00:14:47.433 "state": "enabled", 00:14:47.433 "thread": "nvmf_tgt_poll_group_000", 00:14:47.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:47.433 "listen_address": { 00:14:47.433 "trtype": "RDMA", 00:14:47.433 "adrfam": "IPv4", 00:14:47.434 "traddr": "192.168.100.8", 00:14:47.434 "trsvcid": "4420" 00:14:47.434 }, 00:14:47.434 "peer_address": { 00:14:47.434 "trtype": "RDMA", 00:14:47.434 "adrfam": "IPv4", 00:14:47.434 "traddr": "192.168.100.8", 00:14:47.434 "trsvcid": "34703" 00:14:47.434 }, 00:14:47.434 "auth": { 00:14:47.434 "state": "completed", 00:14:47.434 "digest": "sha256", 00:14:47.434 "dhgroup": "ffdhe6144" 00:14:47.434 } 00:14:47.434 } 
00:14:47.434 ]' 00:14:47.434 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.434 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.434 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.434 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.434 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.693 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.693 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.693 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.693 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:47.693 18:20:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.630 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.890 18:20:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.148 00:14:49.148 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.148 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.148 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.407 { 00:14:49.407 "cntlid": 37, 00:14:49.407 "qid": 0, 00:14:49.407 "state": "enabled", 00:14:49.407 "thread": "nvmf_tgt_poll_group_000", 00:14:49.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:49.407 "listen_address": { 00:14:49.407 "trtype": "RDMA", 00:14:49.407 "adrfam": "IPv4", 00:14:49.407 "traddr": "192.168.100.8", 00:14:49.407 "trsvcid": "4420" 00:14:49.407 }, 00:14:49.407 "peer_address": { 00:14:49.407 "trtype": "RDMA", 00:14:49.407 "adrfam": 
"IPv4", 00:14:49.407 "traddr": "192.168.100.8", 00:14:49.407 "trsvcid": "57914" 00:14:49.407 }, 00:14:49.407 "auth": { 00:14:49.407 "state": "completed", 00:14:49.407 "digest": "sha256", 00:14:49.407 "dhgroup": "ffdhe6144" 00:14:49.407 } 00:14:49.407 } 00:14:49.407 ]' 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.407 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.666 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.666 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.666 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.666 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:49.666 18:20:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.603 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.862 18:20:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.121 00:14:51.121 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.121 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.121 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.379 { 00:14:51.379 "cntlid": 39, 00:14:51.379 "qid": 0, 00:14:51.379 "state": "enabled", 00:14:51.379 "thread": "nvmf_tgt_poll_group_000", 00:14:51.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:51.379 "listen_address": { 00:14:51.379 "trtype": "RDMA", 00:14:51.379 "adrfam": "IPv4", 00:14:51.379 
"traddr": "192.168.100.8", 00:14:51.379 "trsvcid": "4420" 00:14:51.379 }, 00:14:51.379 "peer_address": { 00:14:51.379 "trtype": "RDMA", 00:14:51.379 "adrfam": "IPv4", 00:14:51.379 "traddr": "192.168.100.8", 00:14:51.379 "trsvcid": "35492" 00:14:51.379 }, 00:14:51.379 "auth": { 00:14:51.379 "state": "completed", 00:14:51.379 "digest": "sha256", 00:14:51.379 "dhgroup": "ffdhe6144" 00:14:51.379 } 00:14:51.379 } 00:14:51.379 ]' 00:14:51.379 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.380 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.380 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.380 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.380 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.639 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.639 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.639 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.639 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:51.639 18:20:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.576 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.835 18:20:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.414 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.414 { 00:14:53.414 "cntlid": 41, 00:14:53.414 "qid": 0, 00:14:53.414 "state": "enabled", 
00:14:53.414 "thread": "nvmf_tgt_poll_group_000", 00:14:53.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:53.414 "listen_address": { 00:14:53.414 "trtype": "RDMA", 00:14:53.414 "adrfam": "IPv4", 00:14:53.414 "traddr": "192.168.100.8", 00:14:53.414 "trsvcid": "4420" 00:14:53.414 }, 00:14:53.414 "peer_address": { 00:14:53.414 "trtype": "RDMA", 00:14:53.414 "adrfam": "IPv4", 00:14:53.414 "traddr": "192.168.100.8", 00:14:53.414 "trsvcid": "39029" 00:14:53.414 }, 00:14:53.414 "auth": { 00:14:53.414 "state": "completed", 00:14:53.414 "digest": "sha256", 00:14:53.414 "dhgroup": "ffdhe8192" 00:14:53.414 } 00:14:53.414 } 00:14:53.414 ]' 00:14:53.414 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.673 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.932 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:53.932 18:20:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:14:54.499 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.758 18:20:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:54.758 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.017 18:20:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.017 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.017 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.017 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.017 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.585 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.585 { 00:14:55.585 "cntlid": 43, 00:14:55.585 "qid": 0, 00:14:55.585 "state": "enabled", 00:14:55.585 "thread": "nvmf_tgt_poll_group_000", 00:14:55.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:55.585 "listen_address": { 00:14:55.585 "trtype": "RDMA", 00:14:55.585 "adrfam": "IPv4", 00:14:55.585 "traddr": "192.168.100.8", 00:14:55.585 "trsvcid": "4420" 00:14:55.585 }, 00:14:55.585 "peer_address": { 00:14:55.585 "trtype": "RDMA", 00:14:55.585 "adrfam": "IPv4", 00:14:55.585 "traddr": "192.168.100.8", 00:14:55.585 "trsvcid": "33694" 00:14:55.585 }, 00:14:55.585 "auth": { 00:14:55.585 "state": "completed", 00:14:55.585 "digest": "sha256", 00:14:55.585 "dhgroup": "ffdhe8192" 00:14:55.585 } 00:14:55.585 } 00:14:55.585 ]' 00:14:55.585 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.844 18:20:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.103 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:56.103 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:14:56.674 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
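The trace above keeps repeating the same round trip for every digest/dhgroup/key combination. The lines below are a condensed, illustrative sketch of that round trip, assembled only from commands already visible in this trace; the variable names are ours, the target-side calls are assumed to go to the default RPC socket (as rpc_cmd does here), and this is an annotation of the flow, not the test script itself.

    #!/usr/bin/env bash
    # One DH-HMAC-CHAP round as exercised by target/auth.sh in the trace above.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562

    digest=sha256 dhgroup=ffdhe8192 keyid=0

    # Restrict the host-side bdev/nvme layer to one digest and one DH group.
    $RPC -s $HOST_SOCK bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the subsystem with the matching key (plus ctrlr key when one exists).
    $RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller over RDMA; this is where the DH-CHAP exchange actually runs.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Confirm what the target negotiated, then tear the round down again.
    $RPC nvmf_subsystem_get_qpairs "$SUBSYS" | jq -r '.[0].auth'
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"
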
00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.933 18:20:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.933 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.192 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.192 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.192 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.192 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.451 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.709 { 00:14:57.709 "cntlid": 45, 00:14:57.709 "qid": 0, 00:14:57.709 "state": "enabled", 00:14:57.709 "thread": "nvmf_tgt_poll_group_000", 00:14:57.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:57.709 "listen_address": { 00:14:57.709 "trtype": "RDMA", 00:14:57.709 "adrfam": "IPv4", 00:14:57.709 "traddr": "192.168.100.8", 00:14:57.709 "trsvcid": "4420" 00:14:57.709 }, 00:14:57.709 "peer_address": { 00:14:57.709 "trtype": "RDMA", 00:14:57.709 "adrfam": "IPv4", 00:14:57.709 "traddr": "192.168.100.8", 00:14:57.709 "trsvcid": "44442" 00:14:57.709 }, 00:14:57.709 "auth": { 00:14:57.709 "state": "completed", 00:14:57.709 "digest": "sha256", 00:14:57.709 "dhgroup": "ffdhe8192" 00:14:57.709 } 00:14:57.709 } 00:14:57.709 ]' 00:14:57.709 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.967 18:20:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.225 18:20:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:58.225 18:20:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:14:58.791 18:20:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.050 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.308 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.309 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.309 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.877 00:14:59.877 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.877 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.877 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.877 
18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.877 18:20:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.877 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.877 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.877 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.877 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.877 { 00:14:59.877 "cntlid": 47, 00:14:59.877 "qid": 0, 00:14:59.877 "state": "enabled", 00:14:59.877 "thread": "nvmf_tgt_poll_group_000", 00:14:59.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:59.877 "listen_address": { 00:14:59.877 "trtype": "RDMA", 00:14:59.877 "adrfam": "IPv4", 00:14:59.877 "traddr": "192.168.100.8", 00:14:59.877 "trsvcid": "4420" 00:14:59.877 }, 00:14:59.877 "peer_address": { 00:14:59.877 "trtype": "RDMA", 00:14:59.877 "adrfam": "IPv4", 00:14:59.877 "traddr": "192.168.100.8", 00:14:59.877 "trsvcid": "57989" 00:14:59.877 }, 00:14:59.877 "auth": { 00:14:59.877 "state": "completed", 00:14:59.877 "digest": "sha256", 00:14:59.877 "dhgroup": "ffdhe8192" 00:14:59.877 } 00:14:59.877 } 00:14:59.877 ]' 00:14:59.877 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.136 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.395 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:00.396 18:20:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:00.965 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.241 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.585 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.585 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.845 { 00:15:01.845 "cntlid": 49, 00:15:01.845 "qid": 0, 00:15:01.845 "state": "enabled", 00:15:01.845 "thread": "nvmf_tgt_poll_group_000", 00:15:01.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:01.845 "listen_address": { 00:15:01.845 "trtype": "RDMA", 00:15:01.845 "adrfam": "IPv4", 00:15:01.845 "traddr": "192.168.100.8", 00:15:01.845 "trsvcid": "4420" 00:15:01.845 }, 00:15:01.845 "peer_address": { 00:15:01.845 "trtype": "RDMA", 00:15:01.845 "adrfam": "IPv4", 00:15:01.845 "traddr": "192.168.100.8", 00:15:01.845 "trsvcid": "54732" 00:15:01.845 }, 00:15:01.845 "auth": { 00:15:01.845 "state": "completed", 00:15:01.845 "digest": "sha384", 00:15:01.845 "dhgroup": "null" 00:15:01.845 } 00:15:01.845 } 00:15:01.845 ]' 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.845 18:20:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.106 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.106 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.106 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.106 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:02.106 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:03.045 18:20:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:03.045 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.305 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.565 00:15:03.565 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.565 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.565 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.825 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.826 { 00:15:03.826 "cntlid": 51, 00:15:03.826 "qid": 0, 00:15:03.826 "state": "enabled", 00:15:03.826 "thread": "nvmf_tgt_poll_group_000", 00:15:03.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:03.826 "listen_address": { 00:15:03.826 "trtype": "RDMA", 00:15:03.826 "adrfam": "IPv4", 00:15:03.826 "traddr": "192.168.100.8", 00:15:03.826 "trsvcid": "4420" 00:15:03.826 }, 00:15:03.826 "peer_address": { 00:15:03.826 "trtype": "RDMA", 00:15:03.826 "adrfam": "IPv4", 00:15:03.826 "traddr": "192.168.100.8", 00:15:03.826 "trsvcid": "39099" 00:15:03.826 }, 00:15:03.826 "auth": { 00:15:03.826 "state": "completed", 00:15:03.826 "digest": "sha384", 00:15:03.826 "dhgroup": "null" 00:15:03.826 } 00:15:03.826 } 00:15:03.826 ]' 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.826 18:20:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.085 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:04.085 18:20:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:05.024 18:20:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.024 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.283 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.283 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.283 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:05.283 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.283 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.543 { 00:15:05.543 "cntlid": 53, 00:15:05.543 "qid": 0, 00:15:05.543 "state": "enabled", 00:15:05.543 "thread": "nvmf_tgt_poll_group_000", 00:15:05.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:05.543 "listen_address": { 00:15:05.543 "trtype": "RDMA", 00:15:05.543 "adrfam": "IPv4", 00:15:05.543 "traddr": "192.168.100.8", 00:15:05.543 "trsvcid": "4420" 00:15:05.543 }, 00:15:05.543 "peer_address": { 00:15:05.543 "trtype": "RDMA", 00:15:05.543 "adrfam": "IPv4", 00:15:05.543 "traddr": "192.168.100.8", 00:15:05.543 "trsvcid": "55679" 00:15:05.543 }, 00:15:05.543 "auth": { 00:15:05.543 "state": "completed", 00:15:05.543 "digest": "sha384", 00:15:05.543 "dhgroup": "null" 00:15:05.543 } 00:15:05.543 } 00:15:05.543 ]' 00:15:05.543 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.803 18:20:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.062 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:06.062 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:06.631 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.891 18:20:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.150 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.409 00:15:07.409 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.409 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.409 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.409 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.409 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.410 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.410 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.410 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.410 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.410 { 00:15:07.410 "cntlid": 55, 00:15:07.410 "qid": 0, 00:15:07.410 "state": "enabled", 00:15:07.410 "thread": "nvmf_tgt_poll_group_000", 00:15:07.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:07.410 "listen_address": { 00:15:07.410 "trtype": "RDMA", 00:15:07.410 "adrfam": "IPv4", 00:15:07.410 "traddr": "192.168.100.8", 00:15:07.410 "trsvcid": "4420" 00:15:07.410 }, 00:15:07.410 "peer_address": { 00:15:07.410 "trtype": "RDMA", 00:15:07.410 "adrfam": "IPv4", 00:15:07.410 "traddr": "192.168.100.8", 00:15:07.410 "trsvcid": "51429" 00:15:07.410 }, 00:15:07.410 "auth": { 00:15:07.410 "state": "completed", 00:15:07.410 "digest": "sha384", 00:15:07.410 "dhgroup": "null" 00:15:07.410 } 00:15:07.410 } 00:15:07.410 ]' 00:15:07.410 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.669 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
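Besides the bdev-level attach, each round in the trace also drives the kernel initiator with nvme-cli and then asks the target which parameters the qpair actually completed authentication with. A minimal sketch of that host-side check follows, again using only the commands and jq filters that appear in the trace; the secret value is a placeholder, not one of the keys from this run.

    # Connect with nvme-cli using a DH-CHAP secret, then inspect the qpair's auth result.
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562
    SECRET='DHHC-1:00:<base64-key-material>:'   # placeholder secret

    nvme connect -t rdma -a 192.168.100.8 -n "$SUBSYS" -i 1 \
        -q "$HOSTNQN" --hostid "${HOSTNQN#*uuid:}" -l 0 --dhchap-secret "$SECRET"

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBSYS")
    echo "$qpairs" | jq -r '.[0].auth.digest'    # e.g. sha384
    echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # e.g. null
    echo "$qpairs" | jq -r '.[0].auth.state'     # expected: completed

    nvme disconnect -n "$SUBSYS"
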
00:15:07.929 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:07.929 18:20:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:08.497 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.757 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.016 18:20:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.276 00:15:09.276 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.276 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.276 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.536 { 00:15:09.536 "cntlid": 57, 00:15:09.536 "qid": 0, 00:15:09.536 "state": "enabled", 00:15:09.536 "thread": "nvmf_tgt_poll_group_000", 00:15:09.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:09.536 "listen_address": { 00:15:09.536 "trtype": "RDMA", 00:15:09.536 "adrfam": "IPv4", 00:15:09.536 "traddr": "192.168.100.8", 00:15:09.536 "trsvcid": "4420" 00:15:09.536 }, 00:15:09.536 "peer_address": { 00:15:09.536 "trtype": "RDMA", 00:15:09.536 "adrfam": "IPv4", 00:15:09.536 "traddr": "192.168.100.8", 00:15:09.536 "trsvcid": "52794" 00:15:09.536 }, 00:15:09.536 "auth": { 00:15:09.536 "state": "completed", 00:15:09.536 "digest": "sha384", 00:15:09.536 "dhgroup": "ffdhe2048" 00:15:09.536 } 00:15:09.536 } 00:15:09.536 ]' 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:15:09.536 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.795 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:09.795 18:20:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.733 
18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.733 18:20:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.992 00:15:10.993 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.993 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.993 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.252 { 00:15:11.252 "cntlid": 59, 00:15:11.252 "qid": 0, 00:15:11.252 "state": "enabled", 00:15:11.252 "thread": "nvmf_tgt_poll_group_000", 00:15:11.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:11.252 "listen_address": { 00:15:11.252 "trtype": "RDMA", 00:15:11.252 "adrfam": "IPv4", 00:15:11.252 "traddr": "192.168.100.8", 00:15:11.252 "trsvcid": "4420" 00:15:11.252 }, 00:15:11.252 "peer_address": { 00:15:11.252 "trtype": "RDMA", 00:15:11.252 "adrfam": "IPv4", 00:15:11.252 "traddr": "192.168.100.8", 00:15:11.252 "trsvcid": "38437" 00:15:11.252 }, 00:15:11.252 "auth": { 00:15:11.252 "state": "completed", 00:15:11.252 "digest": "sha384", 00:15:11.252 "dhgroup": "ffdhe2048" 00:15:11.252 } 00:15:11.252 } 00:15:11.252 ]' 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.252 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.512 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
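Alongside the RPC-path verification, each key is also exercised through the kernel initiator: the nvme connect / nvme disconnect entries in this trace pass the secrets inline in DHHC-1 form rather than as keyring names. A hedged sketch of that leg is below; the secret strings are deliberately elided here, since the real values appear verbatim in the surrounding entries.

    # Kernel-initiator leg of the same check (sketch; DHCHAP_SECRET / DHCHAP_CTRL_SECRET are
    # placeholders for the DHHC-1-formatted strings shown in the trace).
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 \
        --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After each such connect the script expects "disconnected 1 controller(s)" and then removes the host from the subsystem before switching to the next key, which is exactly the pattern repeated through the rest of this section.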
00:15:11.512 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.512 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.512 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.512 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.772 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:11.772 18:20:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:12.341 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.601 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.861 18:20:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.121 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.121 { 00:15:13.121 "cntlid": 61, 00:15:13.121 "qid": 0, 00:15:13.121 "state": "enabled", 00:15:13.121 "thread": "nvmf_tgt_poll_group_000", 00:15:13.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:13.121 "listen_address": { 00:15:13.121 "trtype": "RDMA", 00:15:13.121 "adrfam": "IPv4", 00:15:13.121 "traddr": "192.168.100.8", 00:15:13.121 "trsvcid": "4420" 00:15:13.121 }, 00:15:13.121 "peer_address": { 00:15:13.121 "trtype": "RDMA", 00:15:13.121 "adrfam": "IPv4", 00:15:13.121 "traddr": "192.168.100.8", 00:15:13.121 "trsvcid": "41904" 00:15:13.121 }, 00:15:13.121 "auth": { 00:15:13.121 "state": "completed", 00:15:13.121 "digest": "sha384", 00:15:13.121 "dhgroup": "ffdhe2048" 00:15:13.121 } 00:15:13.121 } 00:15:13.121 ]' 00:15:13.121 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.380 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.639 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:13.639 18:20:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:14.208 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.467 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.726 18:20:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.726 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.985 00:15:14.985 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.985 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.985 18:20:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.245 { 00:15:15.245 "cntlid": 63, 00:15:15.245 "qid": 0, 00:15:15.245 "state": "enabled", 00:15:15.245 "thread": "nvmf_tgt_poll_group_000", 00:15:15.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:15.245 "listen_address": { 00:15:15.245 "trtype": "RDMA", 00:15:15.245 "adrfam": "IPv4", 00:15:15.245 "traddr": "192.168.100.8", 00:15:15.245 "trsvcid": "4420" 00:15:15.245 }, 00:15:15.245 "peer_address": { 00:15:15.245 "trtype": "RDMA", 00:15:15.245 "adrfam": "IPv4", 00:15:15.245 "traddr": "192.168.100.8", 00:15:15.245 "trsvcid": "45004" 00:15:15.245 }, 00:15:15.245 "auth": { 00:15:15.245 "state": "completed", 00:15:15.245 "digest": "sha384", 00:15:15.245 "dhgroup": "ffdhe2048" 00:15:15.245 } 00:15:15.245 } 00:15:15.245 ]' 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.245 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.504 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:15.504 18:20:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:16.072 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.332 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.592 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.851 00:15:16.851 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.851 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.851 18:20:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.110 { 00:15:17.110 "cntlid": 65, 00:15:17.110 "qid": 0, 00:15:17.110 "state": "enabled", 00:15:17.110 "thread": "nvmf_tgt_poll_group_000", 00:15:17.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:17.110 "listen_address": { 00:15:17.110 "trtype": "RDMA", 00:15:17.110 "adrfam": "IPv4", 00:15:17.110 "traddr": "192.168.100.8", 00:15:17.110 "trsvcid": "4420" 00:15:17.110 }, 00:15:17.110 "peer_address": { 00:15:17.110 "trtype": "RDMA", 00:15:17.110 "adrfam": "IPv4", 00:15:17.110 "traddr": "192.168.100.8", 00:15:17.110 "trsvcid": "44985" 
00:15:17.110 }, 00:15:17.110 "auth": { 00:15:17.110 "state": "completed", 00:15:17.110 "digest": "sha384", 00:15:17.110 "dhgroup": "ffdhe3072" 00:15:17.110 } 00:15:17.110 } 00:15:17.110 ]' 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.110 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.370 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:17.370 18:20:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:17.938 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.197 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.457 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.716 00:15:18.716 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.716 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.716 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.974 { 00:15:18.974 "cntlid": 67, 00:15:18.974 "qid": 0, 00:15:18.974 "state": "enabled", 00:15:18.974 "thread": "nvmf_tgt_poll_group_000", 00:15:18.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:15:18.974 "listen_address": { 00:15:18.974 "trtype": "RDMA", 00:15:18.974 "adrfam": "IPv4", 00:15:18.974 "traddr": "192.168.100.8", 00:15:18.974 "trsvcid": "4420" 00:15:18.974 }, 00:15:18.974 "peer_address": { 00:15:18.974 "trtype": "RDMA", 00:15:18.974 "adrfam": "IPv4", 00:15:18.974 "traddr": "192.168.100.8", 00:15:18.974 "trsvcid": "58028" 00:15:18.974 }, 00:15:18.974 "auth": { 00:15:18.974 "state": "completed", 00:15:18.974 "digest": "sha384", 00:15:18.974 "dhgroup": "ffdhe3072" 00:15:18.974 } 00:15:18.974 } 00:15:18.974 ]' 00:15:18.974 18:20:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.974 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.233 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:19.233 18:20:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.169 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.428 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.687 00:15:20.687 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.687 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.687 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:20.947 { 00:15:20.947 "cntlid": 69, 00:15:20.947 "qid": 0, 00:15:20.947 "state": "enabled", 00:15:20.947 "thread": "nvmf_tgt_poll_group_000", 00:15:20.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:20.947 "listen_address": { 00:15:20.947 "trtype": "RDMA", 00:15:20.947 "adrfam": "IPv4", 00:15:20.947 "traddr": "192.168.100.8", 00:15:20.947 "trsvcid": "4420" 00:15:20.947 }, 00:15:20.947 "peer_address": { 00:15:20.947 "trtype": "RDMA", 00:15:20.947 "adrfam": "IPv4", 00:15:20.947 "traddr": "192.168.100.8", 00:15:20.947 "trsvcid": "40177" 00:15:20.947 }, 00:15:20.947 "auth": { 00:15:20.947 "state": "completed", 00:15:20.947 "digest": "sha384", 00:15:20.947 "dhgroup": "ffdhe3072" 00:15:20.947 } 00:15:20.947 } 00:15:20.947 ]' 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.947 18:20:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.207 18:20:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:21.207 18:20:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:21.775 18:20:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.033 18:20:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.033 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.293 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.552 00:15:22.552 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.552 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.552 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.811 { 00:15:22.811 "cntlid": 71, 00:15:22.811 "qid": 0, 00:15:22.811 "state": "enabled", 00:15:22.811 "thread": "nvmf_tgt_poll_group_000", 00:15:22.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:22.811 "listen_address": { 00:15:22.811 "trtype": "RDMA", 00:15:22.811 "adrfam": "IPv4", 00:15:22.811 "traddr": "192.168.100.8", 00:15:22.811 "trsvcid": "4420" 00:15:22.811 }, 00:15:22.811 "peer_address": { 00:15:22.811 "trtype": "RDMA", 00:15:22.811 "adrfam": "IPv4", 00:15:22.811 "traddr": "192.168.100.8", 00:15:22.811 "trsvcid": "50603" 00:15:22.811 }, 00:15:22.811 "auth": { 00:15:22.811 "state": "completed", 00:15:22.811 "digest": "sha384", 00:15:22.811 "dhgroup": "ffdhe3072" 00:15:22.811 } 00:15:22.811 } 00:15:22.811 ]' 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.811 18:20:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.070 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:23.070 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:23.636 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.895 18:20:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.154 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.413 00:15:24.413 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.413 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.413 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.671 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.671 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.671 18:20:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.671 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.671 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.671 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.671 { 00:15:24.671 "cntlid": 73, 00:15:24.671 "qid": 0, 00:15:24.672 "state": "enabled", 00:15:24.672 "thread": "nvmf_tgt_poll_group_000", 00:15:24.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:24.672 "listen_address": { 00:15:24.672 "trtype": "RDMA", 00:15:24.672 "adrfam": "IPv4", 00:15:24.672 "traddr": "192.168.100.8", 00:15:24.672 "trsvcid": "4420" 00:15:24.672 }, 00:15:24.672 "peer_address": { 00:15:24.672 "trtype": "RDMA", 00:15:24.672 "adrfam": "IPv4", 00:15:24.672 "traddr": "192.168.100.8", 00:15:24.672 "trsvcid": "33801" 00:15:24.672 }, 00:15:24.672 "auth": { 00:15:24.672 "state": "completed", 00:15:24.672 "digest": "sha384", 00:15:24.672 "dhgroup": "ffdhe4096" 00:15:24.672 } 00:15:24.672 } 00:15:24.672 ]' 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.672 18:20:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.930 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:24.930 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.866 18:20:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.125 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.384 00:15:26.384 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.384 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.384 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.643 { 00:15:26.643 "cntlid": 75, 00:15:26.643 "qid": 0, 00:15:26.643 "state": "enabled", 00:15:26.643 "thread": "nvmf_tgt_poll_group_000", 00:15:26.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:26.643 "listen_address": { 00:15:26.643 "trtype": "RDMA", 00:15:26.643 "adrfam": "IPv4", 00:15:26.643 "traddr": "192.168.100.8", 00:15:26.643 "trsvcid": "4420" 00:15:26.643 }, 00:15:26.643 "peer_address": { 00:15:26.643 "trtype": "RDMA", 00:15:26.643 "adrfam": "IPv4", 00:15:26.643 "traddr": "192.168.100.8", 00:15:26.643 "trsvcid": "55324" 00:15:26.643 }, 00:15:26.643 "auth": { 00:15:26.643 "state": "completed", 00:15:26.643 "digest": "sha384", 00:15:26.643 "dhgroup": "ffdhe4096" 00:15:26.643 } 00:15:26.643 } 00:15:26.643 ]' 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.643 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.903 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:26.903 18:20:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:27.836 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.836 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.837 18:20:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.096 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.354 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.354 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.355 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.613 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.613 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.613 { 00:15:28.613 "cntlid": 77, 00:15:28.613 "qid": 0, 00:15:28.613 "state": "enabled", 00:15:28.613 "thread": "nvmf_tgt_poll_group_000", 00:15:28.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:28.613 "listen_address": { 00:15:28.613 "trtype": "RDMA", 00:15:28.613 "adrfam": "IPv4", 00:15:28.613 "traddr": "192.168.100.8", 00:15:28.613 "trsvcid": "4420" 00:15:28.613 }, 00:15:28.613 "peer_address": { 00:15:28.613 "trtype": "RDMA", 00:15:28.613 "adrfam": "IPv4", 00:15:28.613 "traddr": "192.168.100.8", 00:15:28.613 "trsvcid": "52304" 00:15:28.613 }, 00:15:28.613 "auth": { 00:15:28.613 "state": "completed", 00:15:28.613 "digest": "sha384", 00:15:28.614 "dhgroup": "ffdhe4096" 00:15:28.614 } 00:15:28.614 } 00:15:28.614 ]' 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.614 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.873 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:28.873 18:20:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:29.442 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.701 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.032 18:20:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.322 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.322 { 00:15:30.322 "cntlid": 79, 00:15:30.322 "qid": 0, 00:15:30.322 "state": "enabled", 00:15:30.322 "thread": "nvmf_tgt_poll_group_000", 00:15:30.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:30.322 "listen_address": { 00:15:30.322 "trtype": "RDMA", 00:15:30.322 "adrfam": "IPv4", 00:15:30.322 "traddr": "192.168.100.8", 00:15:30.322 "trsvcid": "4420" 00:15:30.322 }, 00:15:30.322 "peer_address": { 00:15:30.322 "trtype": "RDMA", 00:15:30.322 "adrfam": "IPv4", 00:15:30.322 "traddr": "192.168.100.8", 00:15:30.322 "trsvcid": "54954" 00:15:30.322 }, 00:15:30.322 "auth": { 00:15:30.322 "state": "completed", 00:15:30.322 "digest": "sha384", 00:15:30.322 "dhgroup": "ffdhe4096" 00:15:30.322 } 00:15:30.322 } 00:15:30.322 ]' 00:15:30.322 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.580 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.580 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.581 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.581 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.581 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.581 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.581 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.839 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:30.839 18:20:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:31.406 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.665 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:31.665 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.665 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.665 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.665 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.666 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.666 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.666 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.925 18:20:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.925 18:20:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.185 00:15:32.185 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.185 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.185 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.444 { 00:15:32.444 "cntlid": 81, 00:15:32.444 "qid": 0, 00:15:32.444 "state": "enabled", 00:15:32.444 "thread": "nvmf_tgt_poll_group_000", 00:15:32.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:32.444 "listen_address": { 00:15:32.444 "trtype": "RDMA", 00:15:32.444 "adrfam": "IPv4", 00:15:32.444 "traddr": "192.168.100.8", 00:15:32.444 "trsvcid": "4420" 00:15:32.444 }, 00:15:32.444 "peer_address": { 00:15:32.444 "trtype": "RDMA", 00:15:32.444 "adrfam": "IPv4", 00:15:32.444 "traddr": "192.168.100.8", 00:15:32.444 "trsvcid": "51681" 00:15:32.444 }, 00:15:32.444 "auth": { 00:15:32.444 "state": "completed", 00:15:32.444 "digest": "sha384", 00:15:32.444 "dhgroup": "ffdhe6144" 00:15:32.444 } 00:15:32.444 } 00:15:32.444 ]' 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.444 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.703 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:32.703 18:20:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.639 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.898 18:20:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.157 00:15:34.157 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.157 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.157 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.417 { 00:15:34.417 "cntlid": 83, 00:15:34.417 "qid": 0, 00:15:34.417 "state": "enabled", 00:15:34.417 "thread": "nvmf_tgt_poll_group_000", 00:15:34.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:34.417 "listen_address": { 00:15:34.417 "trtype": "RDMA", 00:15:34.417 "adrfam": "IPv4", 00:15:34.417 "traddr": "192.168.100.8", 00:15:34.417 "trsvcid": "4420" 00:15:34.417 }, 00:15:34.417 "peer_address": { 00:15:34.417 "trtype": "RDMA", 00:15:34.417 "adrfam": "IPv4", 00:15:34.417 "traddr": "192.168.100.8", 00:15:34.417 "trsvcid": "34387" 00:15:34.417 }, 00:15:34.417 "auth": { 00:15:34.417 "state": "completed", 00:15:34.417 "digest": "sha384", 00:15:34.417 "dhgroup": "ffdhe6144" 00:15:34.417 } 00:15:34.417 } 00:15:34.417 ]' 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
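Each connect_authenticate iteration in the trace above repeats the same host-side RPC sequence. A minimal sketch of that sequence, assuming the socket path, NQNs, address and key names used in this run (key1/ckey1 for this pass; other iterations substitute key0/key2/key3), is:

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562

  # (before this, the target side has registered the host, as the rpc_cmd lines show:
  #  nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1)

  # limit the host to one digest/dhgroup combination for this pass
  $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # attach a controller using the DH-HMAC-CHAP key (plus the controller key when the test configures one)
  $RPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller exists, then tear it down before the next iteration
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC bdev_nvme_detach_controller nvme0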
00:15:34.417 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.677 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:34.677 18:20:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:35.613 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.613 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.614 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.872 18:20:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.872 18:20:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.130 00:15:36.130 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.130 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.130 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.389 { 00:15:36.389 "cntlid": 85, 00:15:36.389 "qid": 0, 00:15:36.389 "state": "enabled", 00:15:36.389 "thread": "nvmf_tgt_poll_group_000", 00:15:36.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:36.389 "listen_address": { 00:15:36.389 "trtype": "RDMA", 00:15:36.389 "adrfam": "IPv4", 00:15:36.389 "traddr": "192.168.100.8", 00:15:36.389 "trsvcid": "4420" 00:15:36.389 }, 00:15:36.389 "peer_address": { 00:15:36.389 "trtype": "RDMA", 00:15:36.389 "adrfam": "IPv4", 00:15:36.389 "traddr": "192.168.100.8", 00:15:36.389 "trsvcid": "38713" 00:15:36.389 }, 00:15:36.389 "auth": { 00:15:36.389 "state": "completed", 00:15:36.389 "digest": "sha384", 00:15:36.389 "dhgroup": "ffdhe6144" 00:15:36.389 } 00:15:36.389 } 00:15:36.389 ]' 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.389 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.390 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.649 
18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.649 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.649 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.649 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:36.649 18:20:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:38.027 18:20:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.027 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.028 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:38.286 18:20:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.286 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.546 00:15:38.546 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.546 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.546 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.805 { 00:15:38.805 "cntlid": 87, 00:15:38.805 "qid": 0, 00:15:38.805 "state": "enabled", 00:15:38.805 "thread": "nvmf_tgt_poll_group_000", 00:15:38.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:38.805 "listen_address": { 00:15:38.805 "trtype": "RDMA", 00:15:38.805 "adrfam": "IPv4", 00:15:38.805 "traddr": "192.168.100.8", 00:15:38.805 "trsvcid": "4420" 00:15:38.805 }, 00:15:38.805 "peer_address": { 00:15:38.805 "trtype": "RDMA", 00:15:38.805 "adrfam": "IPv4", 00:15:38.805 "traddr": "192.168.100.8", 00:15:38.805 "trsvcid": "57988" 00:15:38.805 }, 00:15:38.805 "auth": { 00:15:38.805 "state": "completed", 00:15:38.805 "digest": "sha384", 00:15:38.805 "dhgroup": "ffdhe6144" 00:15:38.805 } 00:15:38.805 } 00:15:38.805 ]' 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:38.805 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.065 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.065 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.065 18:20:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.065 18:20:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:39.065 18:20:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.441 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.442 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.701 18:20:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.270 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.270 { 00:15:41.270 "cntlid": 89, 00:15:41.270 "qid": 0, 00:15:41.270 "state": "enabled", 00:15:41.270 "thread": "nvmf_tgt_poll_group_000", 00:15:41.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:41.270 "listen_address": { 00:15:41.270 "trtype": "RDMA", 00:15:41.270 "adrfam": "IPv4", 00:15:41.270 "traddr": "192.168.100.8", 00:15:41.270 "trsvcid": "4420" 00:15:41.270 }, 00:15:41.270 "peer_address": { 00:15:41.270 "trtype": "RDMA", 00:15:41.270 "adrfam": "IPv4", 00:15:41.270 "traddr": "192.168.100.8", 00:15:41.270 "trsvcid": "47476" 00:15:41.270 }, 00:15:41.270 "auth": { 00:15:41.270 "state": "completed", 00:15:41.270 "digest": "sha384", 00:15:41.270 "dhgroup": "ffdhe8192" 00:15:41.270 } 00:15:41.270 } 00:15:41.270 ]' 00:15:41.270 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.529 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.789 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:41.789 18:20:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:42.725 18:20:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.984 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
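For reference, the per-key flow that connect_authenticate drives in the trace above can be sketched from the visible rpc.py and jq calls roughly as follows; the script path, the subsystem/host NQNs, and the key names are taken from the trace, while the shell variable names and the use of the target app's default RPC socket for the target-side calls are illustrative assumptions:

  # Sketch only, reconstructed from the xtrace output above; not part of the captured log.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock                     # host-side bdev_nvme app ("hostrpc" in the trace)
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562

  # 1) restrict the initiator to one digest/dhgroup combination for this iteration
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # 2) allow the host on the target subsystem with the key pair under test
  #    ("rpc_cmd" in the trace, which talks to the target app's default RPC socket)
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3) attach a controller over RDMA, authenticating with the same keys
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 4) read back the qpair and inspect the negotiated auth parameters
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'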
00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.244 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.814 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.814 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.074 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.074 { 00:15:44.074 "cntlid": 91, 00:15:44.074 "qid": 0, 00:15:44.074 "state": "enabled", 00:15:44.074 "thread": "nvmf_tgt_poll_group_000", 00:15:44.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:44.074 "listen_address": { 00:15:44.074 "trtype": "RDMA", 00:15:44.074 "adrfam": "IPv4", 00:15:44.074 "traddr": "192.168.100.8", 00:15:44.074 "trsvcid": "4420" 00:15:44.074 }, 00:15:44.074 "peer_address": { 00:15:44.074 "trtype": "RDMA", 00:15:44.074 "adrfam": "IPv4", 00:15:44.074 "traddr": "192.168.100.8", 00:15:44.074 "trsvcid": "39986" 00:15:44.074 }, 00:15:44.074 "auth": { 
00:15:44.074 "state": "completed", 00:15:44.074 "digest": "sha384", 00:15:44.074 "dhgroup": "ffdhe8192" 00:15:44.074 } 00:15:44.074 } 00:15:44.074 ]' 00:15:44.074 18:20:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.074 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.333 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:44.333 18:20:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:45.268 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.526 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.785 18:20:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.354 00:15:46.354 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.354 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.354 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.613 { 00:15:46.613 "cntlid": 93, 00:15:46.613 "qid": 0, 00:15:46.613 "state": "enabled", 00:15:46.613 "thread": "nvmf_tgt_poll_group_000", 00:15:46.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:46.613 "listen_address": { 00:15:46.613 "trtype": "RDMA", 00:15:46.613 "adrfam": "IPv4", 00:15:46.613 "traddr": "192.168.100.8", 
00:15:46.613 "trsvcid": "4420" 00:15:46.613 }, 00:15:46.613 "peer_address": { 00:15:46.613 "trtype": "RDMA", 00:15:46.613 "adrfam": "IPv4", 00:15:46.613 "traddr": "192.168.100.8", 00:15:46.613 "trsvcid": "35091" 00:15:46.613 }, 00:15:46.613 "auth": { 00:15:46.613 "state": "completed", 00:15:46.613 "digest": "sha384", 00:15:46.613 "dhgroup": "ffdhe8192" 00:15:46.613 } 00:15:46.613 } 00:15:46.613 ]' 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.613 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.873 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:46.873 18:20:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:48.249 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.249 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:48.249 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.249 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.249 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.250 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.250 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:48.250 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.509 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.078 00:15:49.078 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.078 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.078 18:21:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.078 { 00:15:49.078 "cntlid": 95, 00:15:49.078 "qid": 0, 00:15:49.078 "state": "enabled", 00:15:49.078 "thread": "nvmf_tgt_poll_group_000", 00:15:49.078 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:49.078 "listen_address": { 00:15:49.078 "trtype": "RDMA", 00:15:49.078 "adrfam": "IPv4", 00:15:49.078 "traddr": "192.168.100.8", 00:15:49.078 "trsvcid": "4420" 00:15:49.078 }, 00:15:49.078 "peer_address": { 00:15:49.078 "trtype": "RDMA", 00:15:49.078 "adrfam": "IPv4", 00:15:49.078 "traddr": "192.168.100.8", 00:15:49.078 "trsvcid": "47908" 00:15:49.078 }, 00:15:49.078 "auth": { 00:15:49.078 "state": "completed", 00:15:49.078 "digest": "sha384", 00:15:49.078 "dhgroup": "ffdhe8192" 00:15:49.078 } 00:15:49.078 } 00:15:49.078 ]' 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.078 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.338 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.338 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.338 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.338 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.338 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.597 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:49.597 18:21:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:50.534 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.794 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.053 18:21:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.311 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.311 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.570 18:21:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.570 { 00:15:51.570 "cntlid": 97, 00:15:51.570 "qid": 0, 00:15:51.570 "state": "enabled", 00:15:51.570 "thread": "nvmf_tgt_poll_group_000", 00:15:51.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:51.570 "listen_address": { 00:15:51.570 "trtype": "RDMA", 00:15:51.570 "adrfam": "IPv4", 00:15:51.570 "traddr": "192.168.100.8", 00:15:51.570 "trsvcid": "4420" 00:15:51.570 }, 00:15:51.570 "peer_address": { 00:15:51.570 "trtype": "RDMA", 00:15:51.570 "adrfam": "IPv4", 00:15:51.570 "traddr": "192.168.100.8", 00:15:51.570 "trsvcid": "57959" 00:15:51.570 }, 00:15:51.570 "auth": { 00:15:51.570 "state": "completed", 00:15:51.570 "digest": "sha512", 00:15:51.570 "dhgroup": "null" 00:15:51.570 } 00:15:51.570 } 00:15:51.570 ]' 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.570 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.830 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:51.830 18:21:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:15:52.765 18:21:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
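After each attach, the checks and the in-band reconnect that the trace performs can be sketched as follows; the jq filters, the transport parameters, and the nvme-cli flags mirror the log, while the variable names are illustrative (reusing the earlier sketch) and the DHHC-1 secret values are elided here, the trace passing the generated test keys:

  # Sketch only; assumes the $rpc/$hostsock/$subnqn/$hostnqn variables from the earlier sketch.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # detach the bdev controller, then reconnect in-band through nvme-cli with the same secrets
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
       --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 \
       --dhchap-secret "DHHC-1:..."          # secret elided; see the DHHC-1 strings in the trace
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"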
00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.023 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.281 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.282 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.539 00:15:53.539 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.540 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.540 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.798 { 00:15:53.798 "cntlid": 99, 00:15:53.798 "qid": 0, 00:15:53.798 "state": "enabled", 00:15:53.798 "thread": "nvmf_tgt_poll_group_000", 00:15:53.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:53.798 "listen_address": { 00:15:53.798 "trtype": "RDMA", 00:15:53.798 "adrfam": "IPv4", 00:15:53.798 "traddr": "192.168.100.8", 00:15:53.798 "trsvcid": "4420" 00:15:53.798 }, 00:15:53.798 "peer_address": { 00:15:53.798 "trtype": "RDMA", 00:15:53.798 "adrfam": "IPv4", 00:15:53.798 "traddr": "192.168.100.8", 00:15:53.798 "trsvcid": "57134" 00:15:53.798 }, 00:15:53.798 "auth": { 00:15:53.798 "state": "completed", 00:15:53.798 "digest": "sha512", 00:15:53.798 "dhgroup": "null" 00:15:53.798 } 00:15:53.798 } 00:15:53.798 ]' 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.798 18:21:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.056 18:21:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:54.056 18:21:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:55.432 
18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.432 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.691 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.951 00:15:55.951 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.951 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.951 18:21:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.210 
18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.210 { 00:15:56.210 "cntlid": 101, 00:15:56.210 "qid": 0, 00:15:56.210 "state": "enabled", 00:15:56.210 "thread": "nvmf_tgt_poll_group_000", 00:15:56.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:56.210 "listen_address": { 00:15:56.210 "trtype": "RDMA", 00:15:56.210 "adrfam": "IPv4", 00:15:56.210 "traddr": "192.168.100.8", 00:15:56.210 "trsvcid": "4420" 00:15:56.210 }, 00:15:56.210 "peer_address": { 00:15:56.210 "trtype": "RDMA", 00:15:56.210 "adrfam": "IPv4", 00:15:56.210 "traddr": "192.168.100.8", 00:15:56.210 "trsvcid": "40858" 00:15:56.210 }, 00:15:56.210 "auth": { 00:15:56.210 "state": "completed", 00:15:56.210 "digest": "sha512", 00:15:56.210 "dhgroup": "null" 00:15:56.210 } 00:15:56.210 } 00:15:56.210 ]' 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.210 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.469 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:56.469 18:21:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.848 18:21:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.848 18:21:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.107 00:15:58.107 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.107 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.107 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.366 { 00:15:58.366 "cntlid": 103, 00:15:58.366 "qid": 0, 00:15:58.366 "state": "enabled", 00:15:58.366 "thread": "nvmf_tgt_poll_group_000", 00:15:58.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:58.366 "listen_address": { 00:15:58.366 "trtype": "RDMA", 00:15:58.366 "adrfam": "IPv4", 00:15:58.366 "traddr": "192.168.100.8", 00:15:58.366 "trsvcid": "4420" 00:15:58.366 }, 00:15:58.366 "peer_address": { 00:15:58.366 "trtype": "RDMA", 00:15:58.366 "adrfam": "IPv4", 00:15:58.366 "traddr": "192.168.100.8", 00:15:58.366 "trsvcid": "43035" 00:15:58.366 }, 00:15:58.366 "auth": { 00:15:58.366 "state": "completed", 00:15:58.366 "digest": "sha512", 00:15:58.366 "dhgroup": "null" 00:15:58.366 } 00:15:58.366 } 00:15:58.366 ]' 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.366 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.625 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.625 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.625 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.625 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.625 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.884 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:58.884 18:21:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:15:59.822 18:21:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.081 18:21:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.081 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.341 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.341 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.341 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.341 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.341 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.619 { 00:16:00.619 "cntlid": 105, 00:16:00.619 "qid": 0, 00:16:00.619 "state": "enabled", 00:16:00.619 "thread": "nvmf_tgt_poll_group_000", 00:16:00.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:00.619 "listen_address": { 00:16:00.619 "trtype": "RDMA", 00:16:00.619 "adrfam": "IPv4", 00:16:00.619 "traddr": "192.168.100.8", 00:16:00.619 "trsvcid": "4420" 00:16:00.619 }, 00:16:00.619 "peer_address": { 00:16:00.619 "trtype": "RDMA", 00:16:00.619 "adrfam": "IPv4", 00:16:00.619 "traddr": "192.168.100.8", 00:16:00.619 "trsvcid": "55727" 00:16:00.619 }, 00:16:00.619 "auth": { 00:16:00.619 "state": "completed", 00:16:00.619 "digest": "sha512", 00:16:00.619 "dhgroup": "ffdhe2048" 00:16:00.619 } 00:16:00.619 } 00:16:00.619 ]' 00:16:00.619 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.885 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.885 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.885 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.886 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.886 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.886 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.886 18:21:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.145 18:21:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:01.145 18:21:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:02.081 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.340 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.600 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.859 00:16:02.859 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.859 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.859 18:21:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.117 { 00:16:03.117 "cntlid": 107, 00:16:03.117 "qid": 0, 00:16:03.117 "state": "enabled", 00:16:03.117 "thread": "nvmf_tgt_poll_group_000", 00:16:03.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:03.117 "listen_address": { 00:16:03.117 "trtype": "RDMA", 00:16:03.117 "adrfam": "IPv4", 00:16:03.117 "traddr": "192.168.100.8", 00:16:03.117 "trsvcid": "4420" 00:16:03.117 }, 00:16:03.117 "peer_address": { 00:16:03.117 "trtype": "RDMA", 00:16:03.117 "adrfam": "IPv4", 00:16:03.117 "traddr": "192.168.100.8", 00:16:03.117 "trsvcid": "53832" 00:16:03.117 }, 00:16:03.117 "auth": { 00:16:03.117 "state": "completed", 00:16:03.117 "digest": "sha512", 00:16:03.117 "dhgroup": "ffdhe2048" 00:16:03.117 } 00:16:03.117 } 00:16:03.117 ]' 00:16:03.117 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.118 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.377 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 
00:16:03.377 18:21:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.753 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.014 18:21:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.277 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.277 { 00:16:05.277 "cntlid": 109, 00:16:05.277 "qid": 0, 00:16:05.277 "state": "enabled", 00:16:05.277 "thread": "nvmf_tgt_poll_group_000", 00:16:05.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:05.277 "listen_address": { 00:16:05.277 "trtype": "RDMA", 00:16:05.277 "adrfam": "IPv4", 00:16:05.277 "traddr": "192.168.100.8", 00:16:05.277 "trsvcid": "4420" 00:16:05.277 }, 00:16:05.277 "peer_address": { 00:16:05.277 "trtype": "RDMA", 00:16:05.277 "adrfam": "IPv4", 00:16:05.277 "traddr": "192.168.100.8", 00:16:05.277 "trsvcid": "49463" 00:16:05.277 }, 00:16:05.277 "auth": { 00:16:05.277 "state": "completed", 00:16:05.277 "digest": "sha512", 00:16:05.277 "dhgroup": "ffdhe2048" 00:16:05.277 } 00:16:05.277 } 00:16:05.277 ]' 00:16:05.277 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.537 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.796 18:21:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:05.796 18:21:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:06.733 18:21:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.991 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.250 18:21:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.250 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.509 00:16:07.509 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.509 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.509 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.767 { 00:16:07.767 "cntlid": 111, 00:16:07.767 "qid": 0, 00:16:07.767 "state": "enabled", 00:16:07.767 "thread": "nvmf_tgt_poll_group_000", 00:16:07.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:07.767 "listen_address": { 00:16:07.767 "trtype": "RDMA", 00:16:07.767 "adrfam": "IPv4", 00:16:07.767 "traddr": "192.168.100.8", 00:16:07.767 "trsvcid": "4420" 00:16:07.767 }, 00:16:07.767 "peer_address": { 00:16:07.767 "trtype": "RDMA", 00:16:07.767 "adrfam": "IPv4", 00:16:07.767 "traddr": "192.168.100.8", 00:16:07.767 "trsvcid": "46736" 00:16:07.767 }, 00:16:07.767 "auth": { 00:16:07.767 "state": "completed", 00:16:07.767 "digest": "sha512", 00:16:07.767 "dhgroup": "ffdhe2048" 00:16:07.767 } 00:16:07.767 } 00:16:07.767 ]' 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.767 18:21:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.026 18:21:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:08.026 18:21:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.404 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.663 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.921 00:16:09.921 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.921 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.921 18:21:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.921 { 00:16:09.921 "cntlid": 113, 00:16:09.921 "qid": 0, 00:16:09.921 "state": "enabled", 00:16:09.921 "thread": "nvmf_tgt_poll_group_000", 00:16:09.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:09.921 "listen_address": { 00:16:09.921 "trtype": "RDMA", 00:16:09.921 "adrfam": "IPv4", 00:16:09.921 "traddr": "192.168.100.8", 00:16:09.921 "trsvcid": "4420" 00:16:09.921 }, 00:16:09.921 "peer_address": { 00:16:09.921 "trtype": "RDMA", 00:16:09.921 "adrfam": "IPv4", 00:16:09.921 "traddr": "192.168.100.8", 00:16:09.921 "trsvcid": "44201" 00:16:09.921 }, 00:16:09.921 "auth": { 00:16:09.921 "state": "completed", 00:16:09.921 "digest": "sha512", 00:16:09.921 "dhgroup": "ffdhe3072" 00:16:09.921 } 00:16:09.921 } 00:16:09.921 ]' 00:16:09.921 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.181 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.440 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:10.440 18:21:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:11.373 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.632 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.891 18:21:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.150 00:16:12.150 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.150 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.150 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.409 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.410 { 00:16:12.410 "cntlid": 115, 00:16:12.410 "qid": 0, 00:16:12.410 "state": "enabled", 00:16:12.410 "thread": "nvmf_tgt_poll_group_000", 00:16:12.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:12.410 "listen_address": { 00:16:12.410 "trtype": "RDMA", 00:16:12.410 "adrfam": "IPv4", 00:16:12.410 "traddr": "192.168.100.8", 00:16:12.410 "trsvcid": "4420" 00:16:12.410 }, 00:16:12.410 "peer_address": { 00:16:12.410 "trtype": "RDMA", 00:16:12.410 "adrfam": "IPv4", 00:16:12.410 "traddr": "192.168.100.8", 00:16:12.410 "trsvcid": "36872" 00:16:12.410 }, 00:16:12.410 "auth": { 00:16:12.410 "state": "completed", 00:16:12.410 "digest": "sha512", 00:16:12.410 "dhgroup": "ffdhe3072" 00:16:12.410 } 00:16:12.410 } 00:16:12.410 ]' 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.410 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.670 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:12.671 18:21:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:14.049 18:21:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.049 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.308 
18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.308 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.309 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.309 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.568 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.568 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.568 { 00:16:14.568 "cntlid": 117, 00:16:14.568 "qid": 0, 00:16:14.568 "state": "enabled", 00:16:14.568 "thread": "nvmf_tgt_poll_group_000", 00:16:14.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:14.568 "listen_address": { 00:16:14.568 "trtype": "RDMA", 00:16:14.568 "adrfam": "IPv4", 00:16:14.568 "traddr": "192.168.100.8", 00:16:14.568 "trsvcid": "4420" 00:16:14.568 }, 00:16:14.568 "peer_address": { 00:16:14.568 "trtype": "RDMA", 00:16:14.568 "adrfam": "IPv4", 00:16:14.568 "traddr": "192.168.100.8", 00:16:14.568 "trsvcid": "59134" 00:16:14.568 }, 00:16:14.568 "auth": { 00:16:14.568 "state": "completed", 00:16:14.568 "digest": "sha512", 00:16:14.568 "dhgroup": "ffdhe3072" 00:16:14.568 } 00:16:14.568 } 00:16:14.568 ]' 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.827 18:21:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.086 18:21:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:15.086 18:21:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:16.028 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.286 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:16.286 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.286 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.286 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.286 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.287 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.287 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.546 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.805 00:16:16.805 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.805 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.805 18:21:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.065 { 00:16:17.065 "cntlid": 119, 00:16:17.065 "qid": 0, 00:16:17.065 "state": "enabled", 00:16:17.065 "thread": "nvmf_tgt_poll_group_000", 00:16:17.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:17.065 "listen_address": { 00:16:17.065 "trtype": "RDMA", 00:16:17.065 "adrfam": "IPv4", 00:16:17.065 "traddr": "192.168.100.8", 00:16:17.065 "trsvcid": "4420" 00:16:17.065 }, 00:16:17.065 "peer_address": { 00:16:17.065 "trtype": "RDMA", 00:16:17.065 "adrfam": "IPv4", 00:16:17.065 "traddr": "192.168.100.8", 00:16:17.065 "trsvcid": "55445" 00:16:17.065 }, 00:16:17.065 "auth": { 00:16:17.065 "state": "completed", 00:16:17.065 "digest": "sha512", 00:16:17.065 "dhgroup": "ffdhe3072" 
00:16:17.065 } 00:16:17.065 } 00:16:17.065 ]' 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.065 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.323 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:17.323 18:21:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.701 18:21:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.270 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.270 { 00:16:19.270 "cntlid": 121, 00:16:19.270 "qid": 0, 00:16:19.270 "state": "enabled", 00:16:19.270 "thread": "nvmf_tgt_poll_group_000", 00:16:19.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:19.270 "listen_address": { 00:16:19.270 "trtype": "RDMA", 00:16:19.270 "adrfam": "IPv4", 00:16:19.270 "traddr": "192.168.100.8", 00:16:19.270 "trsvcid": "4420" 00:16:19.270 }, 00:16:19.270 "peer_address": { 00:16:19.270 "trtype": "RDMA", 
00:16:19.270 "adrfam": "IPv4", 00:16:19.270 "traddr": "192.168.100.8", 00:16:19.270 "trsvcid": "55851" 00:16:19.270 }, 00:16:19.270 "auth": { 00:16:19.270 "state": "completed", 00:16:19.270 "digest": "sha512", 00:16:19.270 "dhgroup": "ffdhe4096" 00:16:19.270 } 00:16:19.270 } 00:16:19.270 ]' 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.270 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.529 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.529 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.529 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.529 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.529 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.788 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:19.788 18:21:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:20.725 18:21:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:20.984 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.243 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.502 00:16:21.502 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.503 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.503 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.761 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.761 { 00:16:21.761 "cntlid": 123, 00:16:21.761 "qid": 0, 00:16:21.761 "state": "enabled", 00:16:21.761 "thread": "nvmf_tgt_poll_group_000", 
00:16:21.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:21.761 "listen_address": { 00:16:21.761 "trtype": "RDMA", 00:16:21.761 "adrfam": "IPv4", 00:16:21.761 "traddr": "192.168.100.8", 00:16:21.761 "trsvcid": "4420" 00:16:21.761 }, 00:16:21.761 "peer_address": { 00:16:21.761 "trtype": "RDMA", 00:16:21.761 "adrfam": "IPv4", 00:16:21.762 "traddr": "192.168.100.8", 00:16:21.762 "trsvcid": "33566" 00:16:21.762 }, 00:16:21.762 "auth": { 00:16:21.762 "state": "completed", 00:16:21.762 "digest": "sha512", 00:16:21.762 "dhgroup": "ffdhe4096" 00:16:21.762 } 00:16:21.762 } 00:16:21.762 ]' 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.762 18:21:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.020 18:21:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:22.020 18:21:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:23.404 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.404 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:23.405 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.664 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.924 00:16:23.924 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.924 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.924 18:21:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:24.182 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.182 { 00:16:24.182 "cntlid": 125, 00:16:24.182 "qid": 0, 00:16:24.182 "state": "enabled", 00:16:24.182 "thread": "nvmf_tgt_poll_group_000", 00:16:24.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:24.182 "listen_address": { 00:16:24.183 "trtype": "RDMA", 00:16:24.183 "adrfam": "IPv4", 00:16:24.183 "traddr": "192.168.100.8", 00:16:24.183 "trsvcid": "4420" 00:16:24.183 }, 00:16:24.183 "peer_address": { 00:16:24.183 "trtype": "RDMA", 00:16:24.183 "adrfam": "IPv4", 00:16:24.183 "traddr": "192.168.100.8", 00:16:24.183 "trsvcid": "36502" 00:16:24.183 }, 00:16:24.183 "auth": { 00:16:24.183 "state": "completed", 00:16:24.183 "digest": "sha512", 00:16:24.183 "dhgroup": "ffdhe4096" 00:16:24.183 } 00:16:24.183 } 00:16:24.183 ]' 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.183 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.441 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:24.442 18:21:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.820 18:21:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.820 18:21:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.082 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.341 18:21:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.341 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.341 { 00:16:26.341 "cntlid": 127, 00:16:26.341 "qid": 0, 00:16:26.341 "state": "enabled", 00:16:26.341 "thread": "nvmf_tgt_poll_group_000", 00:16:26.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:26.341 "listen_address": { 00:16:26.341 "trtype": "RDMA", 00:16:26.341 "adrfam": "IPv4", 00:16:26.341 "traddr": "192.168.100.8", 00:16:26.341 "trsvcid": "4420" 00:16:26.341 }, 00:16:26.341 "peer_address": { 00:16:26.341 "trtype": "RDMA", 00:16:26.341 "adrfam": "IPv4", 00:16:26.341 "traddr": "192.168.100.8", 00:16:26.341 "trsvcid": "59627" 00:16:26.341 }, 00:16:26.341 "auth": { 00:16:26.341 "state": "completed", 00:16:26.341 "digest": "sha512", 00:16:26.341 "dhgroup": "ffdhe4096" 00:16:26.341 } 00:16:26.341 } 00:16:26.341 ]' 00:16:26.342 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.601 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.860 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:26.860 18:21:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:27.795 18:21:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.054 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.314 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.314 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.314 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.573 00:16:28.573 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.573 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.573 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.832 18:21:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.832 { 00:16:28.832 "cntlid": 129, 00:16:28.832 "qid": 0, 00:16:28.832 "state": "enabled", 00:16:28.832 "thread": "nvmf_tgt_poll_group_000", 00:16:28.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:28.832 "listen_address": { 00:16:28.832 "trtype": "RDMA", 00:16:28.832 "adrfam": "IPv4", 00:16:28.832 "traddr": "192.168.100.8", 00:16:28.832 "trsvcid": "4420" 00:16:28.832 }, 00:16:28.832 "peer_address": { 00:16:28.832 "trtype": "RDMA", 00:16:28.832 "adrfam": "IPv4", 00:16:28.832 "traddr": "192.168.100.8", 00:16:28.832 "trsvcid": "37354" 00:16:28.832 }, 00:16:28.832 "auth": { 00:16:28.832 "state": "completed", 00:16:28.832 "digest": "sha512", 00:16:28.832 "dhgroup": "ffdhe6144" 00:16:28.832 } 00:16:28.832 } 00:16:28.832 ]' 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.832 18:21:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.091 18:21:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:29.091 18:21:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.470 18:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.470 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.729 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.729 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.729 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.729 18:21:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.987 00:16:30.988 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.988 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:16:30.988 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.348 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.348 { 00:16:31.348 "cntlid": 131, 00:16:31.348 "qid": 0, 00:16:31.348 "state": "enabled", 00:16:31.348 "thread": "nvmf_tgt_poll_group_000", 00:16:31.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:31.348 "listen_address": { 00:16:31.348 "trtype": "RDMA", 00:16:31.348 "adrfam": "IPv4", 00:16:31.348 "traddr": "192.168.100.8", 00:16:31.348 "trsvcid": "4420" 00:16:31.348 }, 00:16:31.348 "peer_address": { 00:16:31.348 "trtype": "RDMA", 00:16:31.348 "adrfam": "IPv4", 00:16:31.348 "traddr": "192.168.100.8", 00:16:31.348 "trsvcid": "54101" 00:16:31.348 }, 00:16:31.348 "auth": { 00:16:31.348 "state": "completed", 00:16:31.349 "digest": "sha512", 00:16:31.349 "dhgroup": "ffdhe6144" 00:16:31.349 } 00:16:31.349 } 00:16:31.349 ]' 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.349 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.615 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:31.615 18:21:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret 
DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:32.552 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.812 18:21:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.071 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.331 00:16:33.331 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.331 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.331 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.589 { 00:16:33.589 "cntlid": 133, 00:16:33.589 "qid": 0, 00:16:33.589 "state": "enabled", 00:16:33.589 "thread": "nvmf_tgt_poll_group_000", 00:16:33.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:33.589 "listen_address": { 00:16:33.589 "trtype": "RDMA", 00:16:33.589 "adrfam": "IPv4", 00:16:33.589 "traddr": "192.168.100.8", 00:16:33.589 "trsvcid": "4420" 00:16:33.589 }, 00:16:33.589 "peer_address": { 00:16:33.589 "trtype": "RDMA", 00:16:33.589 "adrfam": "IPv4", 00:16:33.589 "traddr": "192.168.100.8", 00:16:33.589 "trsvcid": "34654" 00:16:33.589 }, 00:16:33.589 "auth": { 00:16:33.589 "state": "completed", 00:16:33.589 "digest": "sha512", 00:16:33.589 "dhgroup": "ffdhe6144" 00:16:33.589 } 00:16:33.589 } 00:16:33.589 ]' 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.589 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.848 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.848 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.848 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.848 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:33.848 18:21:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.227 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.486 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.746 00:16:35.746 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.746 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.746 18:21:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.005 { 00:16:36.005 "cntlid": 135, 00:16:36.005 "qid": 0, 00:16:36.005 "state": "enabled", 00:16:36.005 "thread": "nvmf_tgt_poll_group_000", 00:16:36.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:36.005 "listen_address": { 00:16:36.005 "trtype": "RDMA", 00:16:36.005 "adrfam": "IPv4", 00:16:36.005 "traddr": "192.168.100.8", 00:16:36.005 "trsvcid": "4420" 00:16:36.005 }, 00:16:36.005 "peer_address": { 00:16:36.005 "trtype": "RDMA", 00:16:36.005 "adrfam": "IPv4", 00:16:36.005 "traddr": "192.168.100.8", 00:16:36.005 "trsvcid": "60764" 00:16:36.005 }, 00:16:36.005 "auth": { 00:16:36.005 "state": "completed", 00:16:36.005 "digest": "sha512", 00:16:36.005 "dhgroup": "ffdhe6144" 00:16:36.005 } 00:16:36.005 } 00:16:36.005 ]' 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.005 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.006 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.006 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.265 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.265 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.265 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.265 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 
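Besides the RPC-level check, each pass also exercises the kernel initiator: the nvme_connect wrapper echoed above hands the same host identity and the DHHC-1-formatted secrets to nvme-cli, which is what expands on the following line. A hedged sketch of that step and of the teardown after it (dhchap_key and dhchap_ctrlr_key are illustrative placeholders for the DHHC-1:... strings already printed in this log; the --dhchap-ctrl-secret argument is passed only when a controller key exists for the key index, which is why it is absent for key3 here):

  uuid=0049fda6-1adc-e711-906e-0017a4403562
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
          -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
          --dhchap-secret "$dhchap_key" \
          ${dhchap_ctrlr_key:+--dhchap-ctrl-secret "$dhchap_ctrlr_key"}

  # once the kernel controller has authenticated, tear the pass down again
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
          "nqn.2014-08.org.nvmexpress:uuid:$uuid"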
00:16:36.265 18:21:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.645 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.904 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.905 18:21:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.474 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.474 { 00:16:38.474 "cntlid": 137, 00:16:38.474 "qid": 0, 00:16:38.474 "state": "enabled", 00:16:38.474 "thread": "nvmf_tgt_poll_group_000", 00:16:38.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:38.474 "listen_address": { 00:16:38.474 "trtype": "RDMA", 00:16:38.474 "adrfam": "IPv4", 00:16:38.474 "traddr": "192.168.100.8", 00:16:38.474 "trsvcid": "4420" 00:16:38.474 }, 00:16:38.474 "peer_address": { 00:16:38.474 "trtype": "RDMA", 00:16:38.474 "adrfam": "IPv4", 00:16:38.474 "traddr": "192.168.100.8", 00:16:38.474 "trsvcid": "48152" 00:16:38.474 }, 00:16:38.474 "auth": { 00:16:38.474 "state": "completed", 00:16:38.474 "digest": "sha512", 00:16:38.474 "dhgroup": "ffdhe8192" 00:16:38.474 } 00:16:38.474 } 00:16:38.474 ]' 00:16:38.474 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.733 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.992 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:38.992 18:21:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:39.928 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.186 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.446 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.015 00:16:41.015 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.015 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.015 18:21:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.015 { 00:16:41.015 "cntlid": 139, 00:16:41.015 "qid": 0, 00:16:41.015 "state": "enabled", 00:16:41.015 "thread": "nvmf_tgt_poll_group_000", 00:16:41.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:41.015 "listen_address": { 00:16:41.015 "trtype": "RDMA", 00:16:41.015 "adrfam": "IPv4", 00:16:41.015 "traddr": "192.168.100.8", 00:16:41.015 "trsvcid": "4420" 00:16:41.015 }, 00:16:41.015 "peer_address": { 00:16:41.015 "trtype": "RDMA", 00:16:41.015 "adrfam": "IPv4", 00:16:41.015 "traddr": "192.168.100.8", 00:16:41.015 "trsvcid": "37578" 00:16:41.015 }, 00:16:41.015 "auth": { 00:16:41.015 "state": "completed", 00:16:41.015 "digest": "sha512", 00:16:41.015 "dhgroup": "ffdhe8192" 00:16:41.015 } 00:16:41.015 } 00:16:41.015 ]' 00:16:41.015 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.275 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.534 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:41.534 18:21:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: --dhchap-ctrl-secret DHHC-1:02:NzcyOTE3NjdkODIxNDkzZDZlMzgzODU5YjUxZTE4OGFkYjgzNTVjNGU1NzY3YzMwpTruLA==: 00:16:42.472 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.731 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.990 18:21:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.990 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.559 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.559 { 00:16:43.559 "cntlid": 141, 00:16:43.559 "qid": 0, 00:16:43.559 "state": "enabled", 00:16:43.559 "thread": "nvmf_tgt_poll_group_000", 00:16:43.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:43.559 "listen_address": { 00:16:43.559 "trtype": "RDMA", 00:16:43.559 "adrfam": "IPv4", 00:16:43.559 "traddr": "192.168.100.8", 00:16:43.559 "trsvcid": "4420" 00:16:43.559 }, 00:16:43.559 "peer_address": { 00:16:43.559 "trtype": "RDMA", 00:16:43.559 "adrfam": "IPv4", 00:16:43.559 "traddr": "192.168.100.8", 00:16:43.559 "trsvcid": "33257" 00:16:43.559 }, 00:16:43.559 "auth": { 00:16:43.559 "state": "completed", 00:16:43.559 "digest": "sha512", 00:16:43.559 "dhgroup": "ffdhe8192" 00:16:43.559 } 00:16:43.559 } 00:16:43.559 ]' 00:16:43.559 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.819 18:21:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.078 18:21:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:44.078 18:21:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:01:MmJlNjg1NjdjOWVjZTIxOTdjYjA1ZWFiNGQ4ZTU4NmOF4rQ8: 00:16:45.016 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.280 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.539 18:21:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.108 00:16:46.108 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.108 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.108 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.108 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.367 { 00:16:46.367 "cntlid": 143, 00:16:46.367 "qid": 0, 00:16:46.367 "state": "enabled", 00:16:46.367 "thread": "nvmf_tgt_poll_group_000", 00:16:46.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:46.367 "listen_address": { 00:16:46.367 "trtype": "RDMA", 00:16:46.367 "adrfam": "IPv4", 00:16:46.367 "traddr": "192.168.100.8", 00:16:46.367 "trsvcid": "4420" 00:16:46.367 }, 00:16:46.367 "peer_address": { 00:16:46.367 "trtype": "RDMA", 00:16:46.367 "adrfam": "IPv4", 00:16:46.367 "traddr": "192.168.100.8", 00:16:46.367 "trsvcid": "49311" 00:16:46.367 }, 00:16:46.367 "auth": { 00:16:46.367 "state": "completed", 00:16:46.367 "digest": "sha512", 00:16:46.367 "dhgroup": "ffdhe8192" 00:16:46.367 } 00:16:46.367 } 00:16:46.367 ]' 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.367 18:21:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.367 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.626 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:46.626 18:21:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:47.567 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:47.826 18:22:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.086 18:22:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.086 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.655 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.655 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.914 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.914 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.914 { 00:16:48.914 "cntlid": 145, 00:16:48.914 "qid": 0, 00:16:48.914 "state": "enabled", 00:16:48.914 "thread": "nvmf_tgt_poll_group_000", 00:16:48.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:48.915 "listen_address": { 00:16:48.915 "trtype": "RDMA", 00:16:48.915 "adrfam": "IPv4", 00:16:48.915 "traddr": "192.168.100.8", 00:16:48.915 "trsvcid": "4420" 00:16:48.915 }, 00:16:48.915 
"peer_address": { 00:16:48.915 "trtype": "RDMA", 00:16:48.915 "adrfam": "IPv4", 00:16:48.915 "traddr": "192.168.100.8", 00:16:48.915 "trsvcid": "52748" 00:16:48.915 }, 00:16:48.915 "auth": { 00:16:48.915 "state": "completed", 00:16:48.915 "digest": "sha512", 00:16:48.915 "dhgroup": "ffdhe8192" 00:16:48.915 } 00:16:48.915 } 00:16:48.915 ]' 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.915 18:22:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.174 18:22:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:49.174 18:22:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjgwZDQ0ZDQ4OTZjYzRkODVkNTNiZmFlOTBlYzUwZGIzM2M2YWU5Yzk2ZmNhNTU4xFK5dw==: --dhchap-ctrl-secret DHHC-1:03:OTRiYjAzZDIwNmEwM2NmZDk0ZjQyMTg1ZTRlZjgyNjZjNWVlNTU1Y2ZiMTllNTJmOWUwMzczNDg4ZjBlODA0MwRnIXw=: 00:16:50.111 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.371 18:22:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:50.371 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:50.940 request: 00:16:50.940 { 00:16:50.940 "name": "nvme0", 00:16:50.940 "trtype": "rdma", 00:16:50.940 "traddr": "192.168.100.8", 00:16:50.940 "adrfam": "ipv4", 00:16:50.940 "trsvcid": "4420", 00:16:50.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:50.940 "prchk_reftag": false, 00:16:50.940 "prchk_guard": false, 00:16:50.940 "hdgst": false, 00:16:50.940 "ddgst": false, 00:16:50.940 "dhchap_key": "key2", 00:16:50.940 "allow_unrecognized_csi": false, 00:16:50.940 "method": "bdev_nvme_attach_controller", 00:16:50.940 "req_id": 1 00:16:50.940 } 00:16:50.940 Got JSON-RPC error response 00:16:50.940 response: 00:16:50.940 { 00:16:50.940 "code": -5, 00:16:50.940 "message": "Input/output error" 00:16:50.940 } 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.940 18:22:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:51.508 request: 00:16:51.508 { 00:16:51.508 "name": "nvme0", 00:16:51.508 "trtype": "rdma", 00:16:51.508 "traddr": "192.168.100.8", 00:16:51.508 "adrfam": "ipv4", 00:16:51.508 "trsvcid": "4420", 00:16:51.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:51.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:51.508 "prchk_reftag": false, 00:16:51.508 "prchk_guard": false, 00:16:51.508 "hdgst": false, 00:16:51.508 "ddgst": false, 00:16:51.508 "dhchap_key": "key1", 00:16:51.508 "dhchap_ctrlr_key": "ckey2", 00:16:51.508 "allow_unrecognized_csi": false, 00:16:51.508 "method": "bdev_nvme_attach_controller", 00:16:51.508 "req_id": 1 00:16:51.508 } 00:16:51.508 Got JSON-RPC error response 00:16:51.508 response: 00:16:51.508 { 00:16:51.508 "code": -5, 00:16:51.508 "message": "Input/output error" 00:16:51.508 } 00:16:51.508 18:22:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:51.508 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:51.508 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:51.508 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.509 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.079 request: 00:16:52.079 { 00:16:52.079 "name": "nvme0", 
00:16:52.079 "trtype": "rdma", 00:16:52.079 "traddr": "192.168.100.8", 00:16:52.079 "adrfam": "ipv4", 00:16:52.079 "trsvcid": "4420", 00:16:52.079 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:52.079 "prchk_reftag": false, 00:16:52.079 "prchk_guard": false, 00:16:52.079 "hdgst": false, 00:16:52.079 "ddgst": false, 00:16:52.079 "dhchap_key": "key1", 00:16:52.079 "dhchap_ctrlr_key": "ckey1", 00:16:52.079 "allow_unrecognized_csi": false, 00:16:52.079 "method": "bdev_nvme_attach_controller", 00:16:52.079 "req_id": 1 00:16:52.079 } 00:16:52.079 Got JSON-RPC error response 00:16:52.079 response: 00:16:52.079 { 00:16:52.079 "code": -5, 00:16:52.079 "message": "Input/output error" 00:16:52.079 } 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3401263 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3401263 ']' 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3401263 00:16:52.079 18:22:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401263 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401263' 00:16:52.079 killing process with pid 3401263 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3401263 00:16:52.079 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3401263 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:52.338 18:22:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3423927 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3423927 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3423927 ']' 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.338 18:22:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3423927 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3423927 ']' 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
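(Aside, not part of the captured log: the DH-HMAC-CHAP flow this test keeps repeating above condenses to a short RPC sequence. A minimal sketch only, built from the same rpc.py calls, key files, and addresses that appear in this log; <hostnqn> abbreviates the nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 host NQN, and host-side key registration is omitted here:

  # target side: register the DH-HMAC-CHAP key files and authorize the host with them
  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.BHX
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aKz
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: restrict digests/dhgroups, then attach the controller with the matching keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

A mismatched or missing key on either side fails the attach with the JSON-RPC "Input/output error" responses seen earlier in this log.)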
00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.276 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.536 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.536 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:53.536 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:53.536 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.536 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.536 null0 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BHX 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.aKz ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aKz 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.biS 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.CAA ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CAA 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XGg 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Vib ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vib 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ro6 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.796 18:22:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.364 nvme0n1 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.623 { 00:16:54.623 "cntlid": 1, 00:16:54.623 "qid": 0, 00:16:54.623 "state": "enabled", 00:16:54.623 "thread": "nvmf_tgt_poll_group_000", 00:16:54.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:54.623 "listen_address": { 00:16:54.623 "trtype": "RDMA", 00:16:54.623 "adrfam": "IPv4", 00:16:54.623 "traddr": "192.168.100.8", 00:16:54.623 "trsvcid": "4420" 00:16:54.623 }, 00:16:54.623 "peer_address": { 00:16:54.623 "trtype": "RDMA", 00:16:54.623 "adrfam": "IPv4", 00:16:54.623 "traddr": "192.168.100.8", 00:16:54.623 "trsvcid": "57609" 00:16:54.623 }, 00:16:54.623 "auth": { 00:16:54.623 "state": "completed", 00:16:54.623 "digest": "sha512", 00:16:54.623 "dhgroup": "ffdhe8192" 00:16:54.623 } 00:16:54.623 } 00:16:54.623 ]' 00:16:54.623 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.882 18:22:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.141 18:22:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:55.141 18:22:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:56.083 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:56.342 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.602 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.861 request: 00:16:56.861 { 00:16:56.861 "name": "nvme0", 00:16:56.861 "trtype": "rdma", 00:16:56.861 "traddr": "192.168.100.8", 00:16:56.861 "adrfam": "ipv4", 00:16:56.861 "trsvcid": "4420", 00:16:56.861 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:56.861 "prchk_reftag": false, 00:16:56.861 "prchk_guard": false, 00:16:56.861 "hdgst": false, 00:16:56.861 "ddgst": false, 00:16:56.861 "dhchap_key": "key3", 00:16:56.861 "allow_unrecognized_csi": false, 00:16:56.861 "method": "bdev_nvme_attach_controller", 00:16:56.861 "req_id": 1 00:16:56.861 } 00:16:56.861 Got JSON-RPC error response 00:16:56.861 response: 00:16:56.861 { 00:16:56.861 "code": -5, 00:16:56.861 "message": "Input/output error" 00:16:56.861 } 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:56.861 18:22:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.122 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.122 request: 00:16:57.122 { 00:16:57.122 "name": "nvme0", 00:16:57.122 "trtype": "rdma", 00:16:57.122 "traddr": "192.168.100.8", 00:16:57.122 "adrfam": "ipv4", 00:16:57.122 "trsvcid": "4420", 00:16:57.122 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:57.122 "prchk_reftag": false, 00:16:57.122 "prchk_guard": false, 00:16:57.122 "hdgst": false, 00:16:57.122 "ddgst": false, 00:16:57.122 "dhchap_key": "key3", 00:16:57.122 "allow_unrecognized_csi": false, 00:16:57.122 "method": "bdev_nvme_attach_controller", 00:16:57.122 "req_id": 1 00:16:57.122 } 00:16:57.122 Got JSON-RPC error response 00:16:57.122 response: 00:16:57.122 { 00:16:57.122 "code": -5, 00:16:57.122 "message": "Input/output error" 00:16:57.122 } 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.382 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.642 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.902 request: 00:16:57.902 { 00:16:57.902 "name": "nvme0", 00:16:57.902 "trtype": "rdma", 00:16:57.902 "traddr": "192.168.100.8", 00:16:57.902 "adrfam": "ipv4", 00:16:57.902 "trsvcid": "4420", 00:16:57.902 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:16:57.902 "prchk_reftag": false, 00:16:57.902 "prchk_guard": false, 00:16:57.902 "hdgst": false, 00:16:57.902 "ddgst": false, 00:16:57.902 "dhchap_key": "key0", 00:16:57.902 "dhchap_ctrlr_key": "key1", 00:16:57.902 "allow_unrecognized_csi": false, 00:16:57.902 "method": "bdev_nvme_attach_controller", 00:16:57.902 "req_id": 1 00:16:57.902 } 00:16:57.902 Got JSON-RPC error response 00:16:57.902 response: 00:16:57.902 { 00:16:57.902 "code": -5, 00:16:57.902 "message": "Input/output error" 00:16:57.902 } 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.902 
18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:57.902 18:22:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:58.161 nvme0n1 00:16:58.161 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:58.161 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:58.161 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.420 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.420 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.420 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:58.679 18:22:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:59.248 nvme0n1 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:59.508 18:22:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.508 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:59.767 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.767 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:16:59.767 18:22:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: --dhchap-ctrl-secret DHHC-1:03:YmM4Zjk3NjUxM2UzMWFlZWNmZjBiYjc3YTAyOWZmMTQ5ZGE1MmU1NjYxNWIwMDlhNGJhNWJkNzIxNjZiMjE0ZfUxUC4=: 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.190 18:22:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:01.190 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:01.803 request: 00:17:01.803 { 00:17:01.803 "name": "nvme0", 00:17:01.803 "trtype": "rdma", 00:17:01.803 "traddr": "192.168.100.8", 00:17:01.803 "adrfam": "ipv4", 00:17:01.803 "trsvcid": "4420", 00:17:01.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:17:01.803 "prchk_reftag": false, 00:17:01.803 "prchk_guard": false, 00:17:01.803 "hdgst": false, 00:17:01.803 "ddgst": false, 00:17:01.803 "dhchap_key": "key1", 00:17:01.803 "allow_unrecognized_csi": false, 00:17:01.803 "method": "bdev_nvme_attach_controller", 00:17:01.803 "req_id": 1 00:17:01.803 } 00:17:01.803 Got JSON-RPC error response 00:17:01.803 response: 00:17:01.803 { 00:17:01.803 "code": -5, 00:17:01.803 "message": "Input/output error" 00:17:01.803 } 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:01.803 18:22:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:02.372 nvme0n1 00:17:02.372 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:02.372 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:02.372 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.631 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.632 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.632 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:02.891 18:22:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:03.151 nvme0n1 00:17:03.151 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:03.151 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:03.151 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.410 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.411 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: '' 2s 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: ]] 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjBjNzBkZGRiNWY4ZTkyOWJmNWU4NDA2NDgyM2Q3M2a16XEW: 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:03.670 18:22:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.578 18:22:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: 2s 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: ]] 00:17:05.578 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTU2MWZhMzU2ZDM3YmZiNDM3Yzc1ZGZiYWQ4ZDM4NzE3YzQ1NmQ3MGY3Yzg3Y2Rk2l2u3g==: 00:17:05.837 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:05.837 18:22:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:07.779 18:22:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.039 18:22:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.039 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.608 nvme0n1 00:17:08.608 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.608 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.608 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.867 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.867 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.867 18:22:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.127 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:09.127 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:09.127 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.385 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.385 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:09.385 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.385 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.386 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.386 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:09.386 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:09.645 18:22:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:09.645 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.645 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.905 18:22:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.164 request: 00:17:10.164 { 00:17:10.164 "name": "nvme0", 00:17:10.164 "dhchap_key": "key1", 00:17:10.164 "dhchap_ctrlr_key": "key3", 00:17:10.164 "method": "bdev_nvme_set_keys", 00:17:10.164 "req_id": 1 00:17:10.164 } 00:17:10.164 Got JSON-RPC error response 00:17:10.164 response: 00:17:10.164 { 00:17:10.164 "code": -13, 00:17:10.164 "message": "Permission denied" 00:17:10.164 } 00:17:10.164 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:10.164 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.164 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.164 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.424 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:17:10.424 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.424 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:10.424 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:10.424 18:22:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.804 18:22:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.372 nvme0n1 00:17:12.372 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.372 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.372 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.631 
18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.631 18:22:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.890 request: 00:17:12.890 { 00:17:12.890 "name": "nvme0", 00:17:12.890 "dhchap_key": "key2", 00:17:12.890 "dhchap_ctrlr_key": "key0", 00:17:12.890 "method": "bdev_nvme_set_keys", 00:17:12.890 "req_id": 1 00:17:12.890 } 00:17:12.890 Got JSON-RPC error response 00:17:12.890 response: 00:17:12.890 { 00:17:12.890 "code": -13, 00:17:12.890 "message": "Permission denied" 00:17:12.890 } 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:12.890 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.149 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:13.149 18:22:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:14.085 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:14.085 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:14.085 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:14.345 18:22:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3401422 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3401422 ']' 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3401422 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401422 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401422' 00:17:14.345 killing process with pid 3401422 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3401422 00:17:14.345 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3401422 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:14.914 rmmod nvme_rdma 00:17:14.914 rmmod nvme_fabrics 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3423927 ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3423927 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3423927 ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3423927 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3423927 00:17:14.914 18:22:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3423927' 00:17:14.914 killing process with pid 3423927 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3423927 00:17:14.914 18:22:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3423927 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BHX /tmp/spdk.key-sha256.biS /tmp/spdk.key-sha384.XGg /tmp/spdk.key-sha512.Ro6 /tmp/spdk.key-sha512.aKz /tmp/spdk.key-sha384.CAA /tmp/spdk.key-sha256.Vib '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:17:15.174 00:17:15.174 real 3m10.897s 00:17:15.174 user 7m14.755s 00:17:15.174 sys 0m38.779s 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.174 ************************************ 00:17:15.174 END TEST nvmf_auth_target 00:17:15.174 ************************************ 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:15.174 ************************************ 00:17:15.174 START TEST nvmf_srq_overwhelm 00:17:15.174 ************************************ 00:17:15.174 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:17:15.436 * Looking for test storage... 
00:17:15.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lcov --version 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:17:15.436 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:15.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.437 --rc genhtml_branch_coverage=1 00:17:15.437 --rc genhtml_function_coverage=1 00:17:15.437 --rc genhtml_legend=1 00:17:15.437 --rc geninfo_all_blocks=1 00:17:15.437 --rc geninfo_unexecuted_blocks=1 00:17:15.437 00:17:15.437 ' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:15.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.437 --rc genhtml_branch_coverage=1 00:17:15.437 --rc genhtml_function_coverage=1 00:17:15.437 --rc genhtml_legend=1 00:17:15.437 --rc geninfo_all_blocks=1 00:17:15.437 --rc geninfo_unexecuted_blocks=1 00:17:15.437 00:17:15.437 ' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:15.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.437 --rc genhtml_branch_coverage=1 00:17:15.437 --rc genhtml_function_coverage=1 00:17:15.437 --rc genhtml_legend=1 00:17:15.437 --rc geninfo_all_blocks=1 00:17:15.437 --rc geninfo_unexecuted_blocks=1 00:17:15.437 00:17:15.437 ' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:15.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.437 --rc genhtml_branch_coverage=1 00:17:15.437 --rc genhtml_function_coverage=1 00:17:15.437 --rc genhtml_legend=1 00:17:15.437 --rc geninfo_all_blocks=1 00:17:15.437 --rc geninfo_unexecuted_blocks=1 00:17:15.437 00:17:15.437 ' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
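The lcov probe traced above (scripts/common.sh comparing the installed lcov version against 2 via "lt 1.15 2") splits each version string into fields and compares them left to right to decide which coverage flags get exported. Below is a minimal sketch of that comparison with a simplified version_lt helper; the real cmp_versions also splits on '-' and ':' and dispatches on the comparison operator it is given.

  # Hedged sketch: succeed when version $1 is strictly less than version $2,
  # comparing dot-separated numeric fields left to right (missing fields = 0).
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal is not "less than"
  }

  # As in the trace: lcov 1.15 is older than 2, so the legacy --rc options are exported.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi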
00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:15.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:17:15.437 18:22:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.010 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:22.271 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:22.271 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:22.271 Found net devices under 0000:18:00.0: mlx_0_0 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:22.271 Found net devices under 0000:18:00.1: mlx_0_1 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # is_hw=yes 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
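The device discovery above builds arrays of Intel (e810/x722) and Mellanox PCI device IDs, keeps only the mlx entries because the mlx5 case matches ([[ mlx5 == mlx5 ]]), and then resolves each PCI function to its kernel netdev by expanding /sys/bus/pci/devices/$pci/net/*. Below is a minimal sketch of that PCI-to-netdev step, assuming the device already has a network driver bound.

  # Hedged sketch: print the netdev name(s) registered under a PCI address,
  # mirroring the sysfs glob the trace expands for 0000:18:00.0 and 0000:18:00.1.
  pci_to_netdevs() {
      local pci=$1 path
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $path ]] || continue   # skip if no netdev is registered
          echo "${path##*/}"
      done
  }

  # From this run: pci_to_netdevs 0000:18:00.0  ->  mlx_0_0
  #                pci_to_netdevs 0000:18:00.1  ->  mlx_0_1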
00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # rdma_device_init 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:22.271 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:22.272 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.272 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:17:22.272 altname enp24s0f0np0 00:17:22.272 altname ens785f0np0 00:17:22.272 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.272 valid_lft forever preferred_lft forever 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:22.272 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.272 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:17:22.272 altname enp24s0f1np1 00:17:22.272 altname ens785f1np1 00:17:22.272 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.272 valid_lft forever preferred_lft forever 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # return 0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:22.272 192.168.100.9' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:22.272 192.168.100.9' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # head -n 1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:22.272 192.168.100.9' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # tail -n +2 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # head -n 1 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:22.272 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # nvmfpid=3429905 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # waitforlisten 3429905 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 3429905 ']' 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
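allocate_nic_ips and get_ip_address above read the first IPv4 address of each RDMA-capable interface with "ip -o -4 addr show", strip the prefix length, and collect the results into RDMA_IP_LIST, from which NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are taken (192.168.100.8 and 192.168.100.9 here). Below is a minimal sketch of that extraction, assuming the interface has at least one IPv4 address.

  # Hedged sketch: first IPv4 address of an interface, using the same
  # awk '{print $4}' | cut -d/ -f1 pipeline over `ip -o -4 addr show`.
  get_ipv4() {
      local ifname=$1
      ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1 | head -n 1
  }

  # From this run: get_ipv4 mlx_0_0 -> 192.168.100.8, get_ipv4 mlx_0_1 -> 192.168.100.9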
00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.532 18:22:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.532 [2024-10-08 18:22:35.503865] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:17:22.532 [2024-10-08 18:22:35.503924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.532 [2024-10-08 18:22:35.588121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.532 [2024-10-08 18:22:35.678957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.532 [2024-10-08 18:22:35.679005] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.532 [2024-10-08 18:22:35.679015] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.532 [2024-10-08 18:22:35.679024] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.532 [2024-10-08 18:22:35.679031] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.532 [2024-10-08 18:22:35.680485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.532 [2024-10-08 18:22:35.680521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.532 [2024-10-08 18:22:35.680619] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.532 [2024-10-08 18:22:35.680621] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 [2024-10-08 18:22:36.432760] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x136b2e0/0x136f7d0) succeed. 00:17:23.469 [2024-10-08 18:22:36.443303] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x136c920/0x13b0e70) succeed. 
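Once nvmf_tgt (pid 3429905) is listening on its RPC socket, the test issues rpc_cmd nvmf_create_transport with the options shown above, which is what triggers the two mlx5 create_ib_device notices. Below is a hedged sketch of the equivalent direct RPC call, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the target's default /var/tmp/spdk.sock socket (the real helper keeps a persistent rpc.py session); the transport options themselves are reproduced verbatim from the trace.

  # Hedged sketch: the nvmf_create_transport RPC issued as a direct rpc.py call.
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock

  # Options copied from the trace; -u and -s are passed through to the RPC
  # exactly as srq_overwhelm.sh supplies them.
  "$RPC" -s "$SOCK" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024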
00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 Malloc0 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.469 [2024-10-08 18:22:36.541527] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.469 18:22:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.405 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:24.664 Malloc1 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.664 18:22:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:25.601 Malloc2 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.601 18:22:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:17:26.538 18:22:39 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:26.538 Malloc3 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.538 18:22:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:17:27.916 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:17:27.916 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:27.916 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:27.916 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:17:27.916 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:27.917 
18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 Malloc4 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.917 18:22:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
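Each nvme connect in the loop above is immediately followed by waitforblk, which polls lsblk until the expected namespace block device (nvme0n1, nvme1n1, ...) shows up before the test moves on to the next subsystem. Below is a minimal sketch of that polling pattern, assuming a fixed 30-second retry budget; the real waitforblk in autotest_common.sh keeps its own counter variables.

  # Hedged sketch: wait for a block device name to appear in lsblk output,
  # using the same `lsblk -l -o NAME | grep -q -w <dev>` probe the trace repeats.
  waitforblk_sketch() {
      local dev=$1 i
      for (( i = 0; i < 30; i++ )); do
          lsblk -l -o NAME | grep -q -w "$dev" && return 0
          sleep 1
      done
      echo "timed out waiting for $dev" >&2
      return 1
  }

  # Example from this run: waitforblk_sketch nvme4n1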
00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.853 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:28.853 Malloc5 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.854 18:22:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:17:29.790 18:22:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:17:29.790 
[global] 00:17:29.790 thread=1 00:17:29.790 invalidate=1 00:17:29.790 rw=read 00:17:29.790 time_based=1 00:17:29.790 runtime=10 00:17:29.790 ioengine=libaio 00:17:29.790 direct=1 00:17:29.790 bs=1048576 00:17:29.790 iodepth=128 00:17:29.790 norandommap=1 00:17:29.790 numjobs=13 00:17:29.790 00:17:29.790 [job0] 00:17:29.790 filename=/dev/nvme0n1 00:17:29.790 [job1] 00:17:29.790 filename=/dev/nvme1n1 00:17:29.790 [job2] 00:17:29.790 filename=/dev/nvme2n1 00:17:29.790 [job3] 00:17:29.790 filename=/dev/nvme3n1 00:17:29.790 [job4] 00:17:29.790 filename=/dev/nvme4n1 00:17:29.790 [job5] 00:17:29.790 filename=/dev/nvme5n1 00:17:30.048 Could not set queue depth (nvme0n1) 00:17:30.048 Could not set queue depth (nvme1n1) 00:17:30.048 Could not set queue depth (nvme2n1) 00:17:30.048 Could not set queue depth (nvme3n1) 00:17:30.048 Could not set queue depth (nvme4n1) 00:17:30.048 Could not set queue depth (nvme5n1) 00:17:30.048 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 00:17:30.048 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 00:17:30.048 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 00:17:30.048 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 00:17:30.048 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 00:17:30.048 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:30.048 ... 
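Per device, the generated job file above corresponds to roughly the following standalone fio invocation (a sketch using standard fio flags; the wrapper's exact argument handling is not shown in the log):

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --time_based --runtime=10 \
    --ioengine=libaio --direct=1 --invalidate=1 --thread \
    --bs=1048576 --iodepth=128 --numjobs=13 --norandommap

With six such jobs at numjobs=13 each, fio starts the 78 threads reported below.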
00:17:30.048 fio-3.35 00:17:30.048 Starting 78 threads 00:17:48.146 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431140: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=1, BW=1107KiB/s (1133kB/s)(16.0MiB/14804msec) 00:17:48.146 slat (usec): min=871, max=6389.6k, avg=795562.54, stdev=2135015.32 00:17:48.146 clat (msec): min=2073, max=14801, avg=13149.94, stdev=3638.12 00:17:48.146 lat (msec): min=8463, max=14802, avg=13945.50, stdev=2136.45 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 8490], 20.00th=[14563], 00:17:48.146 | 30.00th=[14697], 40.00th=[14697], 50.00th=[14697], 60.00th=[14832], 00:17:48.146 | 70.00th=[14832], 80.00th=[14832], 90.00th=[14832], 95.00th=[14832], 00:17:48.146 | 99.00th=[14832], 99.50th=[14832], 99.90th=[14832], 99.95th=[14832], 00:17:48.146 | 99.99th=[14832] 00:17:48.146 lat (msec) : >=2000=100.00% 00:17:48.146 cpu : usr=0.00%, sys=0.11%, ctx=26, majf=0, minf=4097 00:17:48.146 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431141: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=1, BW=1775KiB/s (1818kB/s)(22.0MiB/12689msec) 00:17:48.146 slat (usec): min=863, max=4300.0k, avg=479413.63, stdev=1107624.25 00:17:48.146 clat (msec): min=2141, max=12687, avg=11252.99, stdev=2995.70 00:17:48.146 lat (msec): min=4252, max=12688, avg=11732.40, stdev=2208.61 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671], 00:17:48.146 | 30.00th=[12550], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:17:48.146 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.146 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.146 | 99.99th=[12684] 00:17:48.146 lat (msec) : >=2000=100.00% 00:17:48.146 cpu : usr=0.00%, sys=0.19%, ctx=13, majf=0, minf=5633 00:17:48.146 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.146 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431142: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=4, BW=4880KiB/s (4998kB/s)(71.0MiB/14897msec) 00:17:48.146 slat (usec): min=977, max=6414.4k, avg=180608.99, stdev=1009179.24 00:17:48.146 clat (msec): min=2072, max=14890, avg=14299.00, stdev=1654.76 00:17:48.146 lat (msec): min=8487, max=14896, avg=14479.61, stdev=758.12 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 2072], 5.00th=[14160], 10.00th=[14295], 20.00th=[14295], 00:17:48.146 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14563], 60.00th=[14563], 00:17:48.146 | 70.00th=[14697], 80.00th=[14832], 90.00th=[14832], 95.00th=[14832], 00:17:48.146 | 99.00th=[14832], 99.50th=[14832], 99.90th=[14832], 99.95th=[14832], 00:17:48.146 | 99.99th=[14832] 00:17:48.146 lat (msec) : >=2000=100.00% 00:17:48.146 cpu : usr=0.01%, sys=0.52%, ctx=109, majf=0, minf=18177 
00:17:48.146 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.146 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431143: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=5, BW=5614KiB/s (5748kB/s)(58.0MiB/10580msec) 00:17:48.146 slat (usec): min=1041, max=2154.8k, avg=181922.79, stdev=570850.19 00:17:48.146 clat (msec): min=27, max=10578, avg=6127.50, stdev=3795.80 00:17:48.146 lat (msec): min=1968, max=10579, avg=6309.43, stdev=3750.92 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 28], 5.00th=[ 1989], 10.00th=[ 2056], 20.00th=[ 2072], 00:17:48.146 | 30.00th=[ 2106], 40.00th=[ 4212], 50.00th=[ 4245], 60.00th=[ 8490], 00:17:48.146 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:17:48.146 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:17:48.146 | 99.99th=[10537] 00:17:48.146 lat (msec) : 50=1.72%, 2000=3.45%, >=2000=94.83% 00:17:48.146 cpu : usr=0.00%, sys=0.60%, ctx=50, majf=0, minf=14849 00:17:48.146 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.146 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431144: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=5, BW=5617KiB/s (5752kB/s)(58.0MiB/10574msec) 00:17:48.146 slat (usec): min=965, max=2150.4k, avg=181818.13, stdev=571211.61 00:17:48.146 clat (msec): min=27, max=10572, avg=6197.20, stdev=3812.65 00:17:48.146 lat (msec): min=1978, max=10573, avg=6379.02, stdev=3764.42 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 28], 5.00th=[ 1989], 10.00th=[ 2056], 20.00th=[ 2089], 00:17:48.146 | 30.00th=[ 2106], 40.00th=[ 4212], 50.00th=[ 4279], 60.00th=[ 8490], 00:17:48.146 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:17:48.146 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:17:48.146 | 99.99th=[10537] 00:17:48.146 lat (msec) : 50=1.72%, 2000=3.45%, >=2000=94.83% 00:17:48.146 cpu : usr=0.00%, sys=0.60%, ctx=50, majf=0, minf=14849 00:17:48.146 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.146 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431145: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=1, BW=1857KiB/s (1902kB/s)(27.0MiB/14888msec) 00:17:48.146 slat (usec): min=993, max=6394.5k, avg=474353.98, stdev=1676810.26 00:17:48.146 clat (msec): min=2079, max=14884, avg=13880.89, stdev=2900.98 00:17:48.146 lat (msec): min=8473, max=14886, avg=14355.24, stdev=1692.36 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 2072], 5.00th=[ 8490], 10.00th=[ 8490], 20.00th=[14697], 00:17:48.146 | 
30.00th=[14697], 40.00th=[14832], 50.00th=[14832], 60.00th=[14832], 00:17:48.146 | 70.00th=[14832], 80.00th=[14832], 90.00th=[14832], 95.00th=[14832], 00:17:48.146 | 99.00th=[14832], 99.50th=[14832], 99.90th=[14832], 99.95th=[14832], 00:17:48.146 | 99.99th=[14832] 00:17:48.146 lat (msec) : >=2000=100.00% 00:17:48.146 cpu : usr=0.00%, sys=0.20%, ctx=33, majf=0, minf=6913 00:17:48.146 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:17:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.146 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.146 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.146 job0: (groupid=0, jobs=1): err= 0: pid=3431146: Tue Oct 8 18:22:58 2024 00:17:48.146 read: IOPS=41, BW=41.9MiB/s (43.9MB/s)(533MiB/12717msec) 00:17:48.146 slat (usec): min=429, max=4363.3k, avg=19843.66, stdev=263506.59 00:17:48.146 clat (msec): min=283, max=11299, avg=2940.47, stdev=4500.27 00:17:48.146 lat (msec): min=285, max=11301, avg=2960.32, stdev=4513.20 00:17:48.146 clat percentiles (msec): 00:17:48.146 | 1.00th=[ 284], 5.00th=[ 288], 10.00th=[ 296], 20.00th=[ 355], 00:17:48.146 | 30.00th=[ 359], 40.00th=[ 405], 50.00th=[ 443], 60.00th=[ 472], 00:17:48.147 | 70.00th=[ 535], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:17:48.147 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:17:48.147 | 99.99th=[11342] 00:17:48.147 bw ( KiB/s): min= 2052, max=407552, per=7.59%, avg=166295.80, stdev=169221.94, samples=5 00:17:48.147 iops : min= 2, max= 398, avg=162.20, stdev=165.49, samples=5 00:17:48.147 lat (msec) : 500=63.04%, 750=12.38%, >=2000=24.58% 00:17:48.147 cpu : usr=0.03%, sys=0.97%, ctx=1009, majf=0, minf=32769 00:17:48.147 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.147 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431147: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=181, BW=182MiB/s (190MB/s)(2306MiB/12696msec) 00:17:48.147 slat (usec): min=43, max=4259.8k, avg=4575.98, stdev=105761.16 00:17:48.147 clat (msec): min=91, max=12537, avg=421.51, stdev=1344.50 00:17:48.147 lat (msec): min=92, max=12538, avg=426.09, stdev=1364.53 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 93], 5.00th=[ 93], 10.00th=[ 94], 20.00th=[ 94], 00:17:48.147 | 30.00th=[ 95], 40.00th=[ 95], 50.00th=[ 95], 60.00th=[ 96], 00:17:48.147 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 133], 95.00th=[ 4396], 00:17:48.147 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[12550], 99.95th=[12550], 00:17:48.147 | 99.99th=[12550] 00:17:48.147 bw ( KiB/s): min= 2052, max=1226752, per=33.92%, avg=743523.33, stdev=537873.61, samples=6 00:17:48.147 iops : min= 2, max= 1198, avg=726.00, stdev=525.27, samples=6 00:17:48.147 lat (msec) : 100=70.90%, 250=22.81%, >=2000=6.29% 00:17:48.147 cpu : usr=0.06%, sys=1.69%, ctx=2152, majf=0, minf=32769 00:17:48.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.147 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431148: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=0, BW=278KiB/s (284kB/s)(4096KiB/14751msec) 00:17:48.147 slat (msec): min=24, max=6418, avg=3170.44, stdev=3628.26 00:17:48.147 clat (msec): min=2068, max=14726, avg=9993.84, stdev=6042.85 00:17:48.147 lat (msec): min=8487, max=14750, avg=13164.28, stdev=3118.03 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2072], 20.00th=[ 2072], 00:17:48.147 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[14697], 00:17:48.147 | 70.00th=[14697], 80.00th=[14697], 90.00th=[14697], 95.00th=[14697], 00:17:48.147 | 99.00th=[14697], 99.50th=[14697], 99.90th=[14697], 99.95th=[14697], 00:17:48.147 | 99.99th=[14697] 00:17:48.147 lat (msec) : >=2000=100.00% 00:17:48.147 cpu : usr=0.00%, sys=0.03%, ctx=16, majf=0, minf=1025 00:17:48.147 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431149: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=23, BW=23.2MiB/s (24.4MB/s)(345MiB/14849msec) 00:17:48.147 slat (usec): min=58, max=10768k, avg=37014.76, stdev=579466.37 00:17:48.147 clat (msec): min=571, max=13446, avg=5227.45, stdev=6042.21 00:17:48.147 lat (msec): min=575, max=13448, avg=5264.46, stdev=6052.90 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 575], 5.00th=[ 575], 10.00th=[ 575], 20.00th=[ 575], 00:17:48.147 | 30.00th=[ 584], 40.00th=[ 600], 50.00th=[ 625], 60.00th=[ 776], 00:17:48.147 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 95.00th=[13355], 00:17:48.147 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13489], 99.95th=[13489], 00:17:48.147 | 99.99th=[13489] 00:17:48.147 bw ( KiB/s): min= 2048, max=221184, per=5.09%, avg=111617.00, stdev=126517.07, samples=4 00:17:48.147 iops : min= 2, max= 216, avg=109.00, stdev=123.55, samples=4 00:17:48.147 lat (msec) : 750=57.97%, 1000=4.93%, >=2000=37.10% 00:17:48.147 cpu : usr=0.01%, sys=0.97%, ctx=329, majf=0, minf=32769 00:17:48.147 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.3%, >=64=81.7% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:48.147 issued rwts: total=345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431150: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=104, BW=105MiB/s (110MB/s)(1335MiB/12748msec) 00:17:48.147 slat (usec): min=51, max=2132.7k, avg=7970.87, stdev=88462.60 00:17:48.147 clat (msec): min=132, max=4963, avg=1172.95, stdev=1500.83 00:17:48.147 lat (msec): min=133, max=4965, avg=1180.92, stdev=1505.50 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 136], 5.00th=[ 180], 10.00th=[ 236], 20.00th=[ 279], 00:17:48.147 | 30.00th=[ 292], 40.00th=[ 330], 50.00th=[ 498], 60.00th=[ 676], 00:17:48.147 | 70.00th=[ 751], 80.00th=[ 1905], 
90.00th=[ 3876], 95.00th=[ 4866], 00:17:48.147 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:17:48.147 | 99.99th=[ 4933] 00:17:48.147 bw ( KiB/s): min= 2052, max=565248, per=8.68%, avg=190275.54, stdev=173741.54, samples=13 00:17:48.147 iops : min= 2, max= 552, avg=185.69, stdev=169.75, samples=13 00:17:48.147 lat (msec) : 250=11.24%, 500=38.80%, 750=19.70%, 1000=10.19%, 2000=0.07% 00:17:48.147 lat (msec) : >=2000=20.00% 00:17:48.147 cpu : usr=0.02%, sys=2.06%, ctx=1154, majf=0, minf=32769 00:17:48.147 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.147 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431151: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(550MiB/12823msec) 00:17:48.147 slat (usec): min=157, max=4274.1k, avg=19421.51, stdev=256741.84 00:17:48.147 clat (msec): min=292, max=11292, avg=2873.46, stdev=4449.88 00:17:48.147 lat (msec): min=294, max=11294, avg=2892.88, stdev=4462.72 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 296], 5.00th=[ 300], 10.00th=[ 300], 20.00th=[ 363], 00:17:48.147 | 30.00th=[ 368], 40.00th=[ 430], 50.00th=[ 460], 60.00th=[ 489], 00:17:48.147 | 70.00th=[ 527], 80.00th=[10805], 90.00th=[11073], 95.00th=[11208], 00:17:48.147 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:17:48.147 | 99.99th=[11342] 00:17:48.147 bw ( KiB/s): min= 1410, max=382976, per=6.58%, avg=144184.00, stdev=165015.72, samples=6 00:17:48.147 iops : min= 1, max= 374, avg=140.33, stdev=161.48, samples=6 00:17:48.147 lat (msec) : 500=61.82%, 750=14.55%, >=2000=23.64% 00:17:48.147 cpu : usr=0.00%, sys=1.08%, ctx=1018, majf=0, minf=32769 00:17:48.147 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.147 issued rwts: total=550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job0: (groupid=0, jobs=1): err= 0: pid=3431152: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=1, BW=1384KiB/s (1417kB/s)(20.0MiB/14803msec) 00:17:48.147 slat (usec): min=712, max=6396.7k, avg=636366.14, stdev=1926112.50 00:17:48.147 clat (msec): min=2075, max=14801, avg=13454.45, stdev=3291.57 00:17:48.147 lat (msec): min=8472, max=14802, avg=14090.81, stdev=1920.72 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 8490], 20.00th=[14563], 00:17:48.147 | 30.00th=[14697], 40.00th=[14697], 50.00th=[14697], 60.00th=[14697], 00:17:48.147 | 70.00th=[14832], 80.00th=[14832], 90.00th=[14832], 95.00th=[14832], 00:17:48.147 | 99.00th=[14832], 99.50th=[14832], 99.90th=[14832], 99.95th=[14832], 00:17:48.147 | 99.99th=[14832] 00:17:48.147 lat (msec) : >=2000=100.00% 00:17:48.147 cpu : usr=0.00%, sys=0.11%, ctx=25, majf=0, minf=5121 00:17:48.147 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, 
>=64=0.0% 00:17:48.147 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job1: (groupid=0, jobs=1): err= 0: pid=3431168: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=0, BW=402KiB/s (412kB/s)(5120KiB/12722msec) 00:17:48.147 slat (msec): min=20, max=6301, avg=2110.13, stdev=2562.55 00:17:48.147 clat (msec): min=2171, max=6419, avg=4693.83, stdev=1776.43 00:17:48.147 lat (msec): min=4234, max=12721, avg=6803.96, stdev=3480.11 00:17:48.147 clat percentiles (msec): 00:17:48.147 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2165], 00:17:48.147 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 4245], 60.00th=[ 4245], 00:17:48.147 | 70.00th=[ 6409], 80.00th=[ 6409], 90.00th=[ 6409], 95.00th=[ 6409], 00:17:48.147 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409], 00:17:48.147 | 99.99th=[ 6409] 00:17:48.147 lat (msec) : >=2000=100.00% 00:17:48.147 cpu : usr=0.00%, sys=0.04%, ctx=10, majf=0, minf=1281 00:17:48.147 IO depths : 1=20.0%, 2=40.0%, 4=40.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.147 issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.147 job1: (groupid=0, jobs=1): err= 0: pid=3431169: Tue Oct 8 18:22:58 2024 00:17:48.147 read: IOPS=3, BW=3618KiB/s (3705kB/s)(45.0MiB/12735msec) 00:17:48.147 slat (usec): min=982, max=2066.1k, avg=234946.99, stdev=635584.97 00:17:48.148 clat (msec): min=2161, max=12732, avg=9227.30, stdev=3420.08 00:17:48.148 lat (msec): min=4213, max=12734, avg=9462.25, stdev=3284.14 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 4329], 00:17:48.148 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:17:48.148 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.148 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.148 | 99.99th=[12684] 00:17:48.148 lat (msec) : >=2000=100.00% 00:17:48.148 cpu : usr=0.01%, sys=0.38%, ctx=40, majf=0, minf=11521 00:17:48.148 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.148 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431171: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=4, BW=4667KiB/s (4779kB/s)(68.0MiB/14919msec) 00:17:48.148 slat (usec): min=643, max=2149.6k, avg=157534.90, stdev=543559.64 00:17:48.148 clat (msec): min=4205, max=14917, avg=13982.52, stdev=2326.28 00:17:48.148 lat (msec): min=6292, max=14918, avg=14140.06, stdev=1993.18 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 4212], 5.00th=[ 8423], 10.00th=[10537], 20.00th=[14697], 00:17:48.148 | 30.00th=[14697], 40.00th=[14832], 50.00th=[14832], 60.00th=[14832], 00:17:48.148 | 70.00th=[14832], 80.00th=[14832], 90.00th=[14966], 95.00th=[14966], 00:17:48.148 | 99.00th=[14966], 99.50th=[14966], 99.90th=[14966], 99.95th=[14966], 00:17:48.148 | 99.99th=[14966] 00:17:48.148 lat (msec) : >=2000=100.00% 
00:17:48.148 cpu : usr=0.00%, sys=0.42%, ctx=73, majf=0, minf=17409 00:17:48.148 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.148 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431172: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=47, BW=47.9MiB/s (50.2MB/s)(610MiB/12738msec) 00:17:48.148 slat (usec): min=50, max=2136.1k, avg=17409.82, stdev=147540.88 00:17:48.148 clat (msec): min=241, max=8829, avg=2449.77, stdev=3179.63 00:17:48.148 lat (msec): min=246, max=8833, avg=2467.18, stdev=3186.99 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 284], 5.00th=[ 384], 10.00th=[ 550], 20.00th=[ 760], 00:17:48.148 | 30.00th=[ 785], 40.00th=[ 810], 50.00th=[ 827], 60.00th=[ 835], 00:17:48.148 | 70.00th=[ 1036], 80.00th=[ 6745], 90.00th=[ 8658], 95.00th=[ 8792], 00:17:48.148 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:48.148 | 99.99th=[ 8792] 00:17:48.148 bw ( KiB/s): min= 2052, max=221184, per=5.01%, avg=109907.78, stdev=83651.16, samples=9 00:17:48.148 iops : min= 2, max= 216, avg=107.11, stdev=82.01, samples=9 00:17:48.148 lat (msec) : 250=0.49%, 500=7.70%, 750=9.18%, 1000=51.80%, 2000=9.18% 00:17:48.148 lat (msec) : >=2000=21.64% 00:17:48.148 cpu : usr=0.03%, sys=1.41%, ctx=907, majf=0, minf=32769 00:17:48.148 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.148 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431173: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=72, BW=72.9MiB/s (76.4MB/s)(732MiB/10047msec) 00:17:48.148 slat (usec): min=49, max=2169.3k, avg=13654.04, stdev=133780.51 00:17:48.148 clat (msec): min=45, max=7090, avg=1689.63, stdev=2375.08 00:17:48.148 lat (msec): min=46, max=7090, avg=1703.29, stdev=2383.34 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 63], 5.00th=[ 146], 10.00th=[ 275], 20.00th=[ 397], 00:17:48.148 | 30.00th=[ 493], 40.00th=[ 617], 50.00th=[ 709], 60.00th=[ 751], 00:17:48.148 | 70.00th=[ 793], 80.00th=[ 860], 90.00th=[ 7013], 95.00th=[ 7080], 00:17:48.148 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:17:48.148 | 99.99th=[ 7080] 00:17:48.148 bw ( KiB/s): min= 8192, max=327680, per=6.28%, avg=137671.11, stdev=103660.86, samples=9 00:17:48.148 iops : min= 8, max= 320, avg=134.44, stdev=101.23, samples=9 00:17:48.148 lat (msec) : 50=0.41%, 100=2.46%, 250=6.56%, 500=20.77%, 750=30.19% 00:17:48.148 lat (msec) : 1000=19.95%, >=2000=19.67% 00:17:48.148 cpu : usr=0.06%, sys=2.20%, ctx=637, majf=0, minf=32769 00:17:48.148 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.148 issued rwts: total=732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 
00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431174: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=0, BW=278KiB/s (284kB/s)(4096KiB/14748msec) 00:17:48.148 slat (msec): min=13, max=6270, avg=2637.63, stdev=3135.05 00:17:48.148 clat (msec): min=4197, max=8476, avg=7393.88, stdev=2131.21 00:17:48.148 lat (msec): min=8438, max=14747, avg=10031.52, stdev=3144.16 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 4212], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4212], 00:17:48.148 | 30.00th=[ 8423], 40.00th=[ 8423], 50.00th=[ 8423], 60.00th=[ 8490], 00:17:48.148 | 70.00th=[ 8490], 80.00th=[ 8490], 90.00th=[ 8490], 95.00th=[ 8490], 00:17:48.148 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:17:48.148 | 99.99th=[ 8490] 00:17:48.148 lat (msec) : >=2000=100.00% 00:17:48.148 cpu : usr=0.00%, sys=0.02%, ctx=7, majf=0, minf=1025 00:17:48.148 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431175: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=11, BW=11.2MiB/s (11.7MB/s)(143MiB/12775msec) 00:17:48.148 slat (usec): min=806, max=4256.3k, avg=74204.15, stdev=502566.55 00:17:48.148 clat (msec): min=1192, max=12484, avg=10711.91, stdev=3202.64 00:17:48.148 lat (msec): min=1196, max=12486, avg=10786.11, stdev=3118.10 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 1200], 5.00th=[ 1318], 10.00th=[ 6074], 20.00th=[11476], 00:17:48.148 | 30.00th=[11610], 40.00th=[11745], 50.00th=[11879], 60.00th=[12013], 00:17:48.148 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12416], 95.00th=[12416], 00:17:48.148 | 99.00th=[12416], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:17:48.148 | 99.99th=[12550] 00:17:48.148 bw ( KiB/s): min= 2052, max=16384, per=0.37%, avg=8193.00, stdev=6475.08, samples=4 00:17:48.148 iops : min= 2, max= 16, avg= 8.00, stdev= 6.32, samples=4 00:17:48.148 lat (msec) : 2000=9.09%, >=2000=90.91% 00:17:48.148 cpu : usr=0.01%, sys=0.98%, ctx=193, majf=0, minf=32769 00:17:48.148 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.2%, 32=22.4%, >=64=55.9% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.9% 00:17:48.148 issued rwts: total=143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431176: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=2, BW=2999KiB/s (3071kB/s)(37.0MiB/12635msec) 00:17:48.148 slat (usec): min=988, max=2131.4k, avg=283999.89, stdev=705524.76 00:17:48.148 clat (msec): min=2126, max=12632, avg=9676.83, stdev=3359.17 00:17:48.148 lat (msec): min=4186, max=12634, avg=9960.83, stdev=3140.18 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:17:48.148 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12550], 00:17:48.148 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.148 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.148 | 99.99th=[12684] 00:17:48.148 lat 
(msec) : >=2000=100.00% 00:17:48.148 cpu : usr=0.01%, sys=0.32%, ctx=31, majf=0, minf=9473 00:17:48.148 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.148 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.148 job1: (groupid=0, jobs=1): err= 0: pid=3431177: Tue Oct 8 18:22:58 2024 00:17:48.148 read: IOPS=2, BW=2243KiB/s (2296kB/s)(28.0MiB/12785msec) 00:17:48.148 slat (usec): min=1090, max=2116.1k, avg=378737.67, stdev=794600.45 00:17:48.148 clat (msec): min=2179, max=12776, avg=9219.55, stdev=3330.65 00:17:48.148 lat (msec): min=4289, max=12784, avg=9598.29, stdev=3095.13 00:17:48.148 clat percentiles (msec): 00:17:48.148 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:17:48.148 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10805], 00:17:48.148 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:17:48.148 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.148 | 99.99th=[12818] 00:17:48.148 lat (msec) : >=2000=100.00% 00:17:48.148 cpu : usr=0.00%, sys=0.25%, ctx=39, majf=0, minf=7169 00:17:48.148 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.149 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job1: (groupid=0, jobs=1): err= 0: pid=3431178: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=4, BW=4855KiB/s (4971kB/s)(61.0MiB/12866msec) 00:17:48.149 slat (usec): min=872, max=2122.5k, avg=175160.55, stdev=561810.52 00:17:48.149 clat (msec): min=2180, max=12863, avg=11695.95, stdev=2389.92 00:17:48.149 lat (msec): min=4289, max=12865, avg=11871.11, stdev=2047.96 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 2165], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[10805], 00:17:48.149 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:17:48.149 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:17:48.149 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.149 | 99.99th=[12818] 00:17:48.149 lat (msec) : >=2000=100.00% 00:17:48.149 cpu : usr=0.00%, sys=0.59%, ctx=62, majf=0, minf=15617 00:17:48.149 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.149 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job1: (groupid=0, jobs=1): err= 0: pid=3431179: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=147, BW=148MiB/s (155MB/s)(1894MiB/12811msec) 00:17:48.149 slat (usec): min=42, max=2245.3k, avg=5614.73, stdev=85966.38 00:17:48.149 clat (msec): min=115, max=8816, avg=831.91, stdev=2106.58 00:17:48.149 lat (msec): min=115, max=8817, avg=837.53, stdev=2113.87 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 138], 
20.00th=[ 144], 00:17:48.149 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 211], 00:17:48.149 | 70.00th=[ 397], 80.00th=[ 518], 90.00th=[ 625], 95.00th=[ 8658], 00:17:48.149 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:48.149 | 99.99th=[ 8792] 00:17:48.149 bw ( KiB/s): min= 1410, max=921394, per=15.01%, avg=329157.91, stdev=335116.87, samples=11 00:17:48.149 iops : min= 1, max= 899, avg=321.18, stdev=327.28, samples=11 00:17:48.149 lat (msec) : 250=61.77%, 500=15.58%, 750=15.63%, >=2000=7.02% 00:17:48.149 cpu : usr=0.05%, sys=2.04%, ctx=1620, majf=0, minf=32769 00:17:48.149 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.149 issued rwts: total=1894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job1: (groupid=0, jobs=1): err= 0: pid=3431180: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=42, BW=42.7MiB/s (44.7MB/s)(542MiB/12703msec) 00:17:48.149 slat (usec): min=46, max=2136.1k, avg=19531.10, stdev=165920.62 00:17:48.149 clat (msec): min=411, max=12690, avg=2468.43, stdev=3293.95 00:17:48.149 lat (msec): min=425, max=12692, avg=2487.96, stdev=3311.18 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 426], 5.00th=[ 460], 10.00th=[ 485], 20.00th=[ 518], 00:17:48.149 | 30.00th=[ 558], 40.00th=[ 609], 50.00th=[ 802], 60.00th=[ 844], 00:17:48.149 | 70.00th=[ 852], 80.00th=[ 7819], 90.00th=[ 7953], 95.00th=[ 8087], 00:17:48.149 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.149 | 99.99th=[12684] 00:17:48.149 bw ( KiB/s): min= 2048, max=239616, per=4.31%, avg=94436.00, stdev=104444.77, samples=9 00:17:48.149 iops : min= 2, max= 234, avg=92.22, stdev=102.00, samples=9 00:17:48.149 lat (msec) : 500=16.61%, 750=32.29%, 1000=25.83%, 2000=0.37%, >=2000=24.91% 00:17:48.149 cpu : usr=0.01%, sys=1.21%, ctx=750, majf=0, minf=32769 00:17:48.149 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.149 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job1: (groupid=0, jobs=1): err= 0: pid=3431181: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=4, BW=4969KiB/s (5088kB/s)(72.0MiB/14839msec) 00:17:48.149 slat (usec): min=793, max=4212.2k, avg=147797.86, stdev=727074.15 00:17:48.149 clat (msec): min=4197, max=14835, avg=9697.06, stdev=2705.71 00:17:48.149 lat (msec): min=8305, max=14838, avg=9844.86, stdev=2691.65 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 4212], 5.00th=[ 8288], 10.00th=[ 8288], 20.00th=[ 8356], 00:17:48.149 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8356], 60.00th=[ 8490], 00:17:48.149 | 70.00th=[ 8490], 80.00th=[14832], 90.00th=[14832], 95.00th=[14832], 00:17:48.149 | 99.00th=[14832], 99.50th=[14832], 99.90th=[14832], 99.95th=[14832], 00:17:48.149 | 99.99th=[14832] 00:17:48.149 lat (msec) : >=2000=100.00% 00:17:48.149 cpu : usr=0.00%, sys=0.47%, ctx=33, majf=0, minf=18433 00:17:48.149 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.149 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job2: (groupid=0, jobs=1): err= 0: pid=3431192: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=1, BW=1607KiB/s (1645kB/s)(20.0MiB/12746msec) 00:17:48.149 slat (usec): min=1423, max=2152.7k, avg=528344.64, stdev=905755.85 00:17:48.149 clat (msec): min=2178, max=12743, avg=9384.08, stdev=3429.16 00:17:48.149 lat (msec): min=4272, max=12745, avg=9912.43, stdev=3054.09 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477], 00:17:48.149 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10805], 00:17:48.149 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.149 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.149 | 99.99th=[12684] 00:17:48.149 lat (msec) : >=2000=100.00% 00:17:48.149 cpu : usr=0.01%, sys=0.18%, ctx=38, majf=0, minf=5121 00:17:48.149 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.149 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job2: (groupid=0, jobs=1): err= 0: pid=3431194: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(161MiB/12786msec) 00:17:48.149 slat (usec): min=662, max=2161.4k, avg=66287.45, stdev=325992.08 00:17:48.149 clat (msec): min=1670, max=12557, avg=9553.29, stdev=3727.73 00:17:48.149 lat (msec): min=1671, max=12644, avg=9619.58, stdev=3687.74 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 1703], 5.00th=[ 1787], 10.00th=[ 1955], 20.00th=[ 6342], 00:17:48.149 | 30.00th=[10671], 40.00th=[11073], 50.00th=[11342], 60.00th=[11610], 00:17:48.149 | 70.00th=[11879], 80.00th=[12147], 90.00th=[12416], 95.00th=[12550], 00:17:48.149 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:17:48.149 | 99.99th=[12550] 00:17:48.149 bw ( KiB/s): min= 2052, max=36864, per=0.53%, avg=11606.00, stdev=12865.45, samples=6 00:17:48.149 iops : min= 2, max= 36, avg=11.33, stdev=12.56, samples=6 00:17:48.149 lat (msec) : 2000=11.80%, >=2000=88.20% 00:17:48.149 cpu : usr=0.00%, sys=1.08%, ctx=368, majf=0, minf=32516 00:17:48.149 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=9.9%, 32=19.9%, >=64=60.9% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:17:48.149 issued rwts: total=161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job2: (groupid=0, jobs=1): err= 0: pid=3431195: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=4, BW=4462KiB/s (4569kB/s)(55.0MiB/12622msec) 00:17:48.149 slat (usec): min=981, max=2069.1k, avg=190886.63, stdev=580582.44 00:17:48.149 clat (msec): min=2122, max=12620, avg=8825.69, stdev=3245.31 00:17:48.149 lat (msec): min=4172, max=12621, avg=9016.58, stdev=3151.16 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:17:48.149 | 30.00th=[ 
6342], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:17:48.149 | 70.00th=[12416], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:17:48.149 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.149 | 99.99th=[12684] 00:17:48.149 lat (msec) : >=2000=100.00% 00:17:48.149 cpu : usr=0.00%, sys=0.46%, ctx=43, majf=0, minf=14081 00:17:48.149 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.149 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job2: (groupid=0, jobs=1): err= 0: pid=3431196: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=57, BW=57.8MiB/s (60.6MB/s)(743MiB/12853msec) 00:17:48.149 slat (usec): min=47, max=4019.0k, avg=14369.82, stdev=173574.92 00:17:48.149 clat (msec): min=327, max=7000, avg=1290.73, stdev=1690.98 00:17:48.149 lat (msec): min=331, max=7002, avg=1305.10, stdev=1709.24 00:17:48.149 clat percentiles (msec): 00:17:48.149 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:17:48.149 | 30.00th=[ 418], 40.00th=[ 460], 50.00th=[ 485], 60.00th=[ 542], 00:17:48.149 | 70.00th=[ 625], 80.00th=[ 3608], 90.00th=[ 3977], 95.00th=[ 4144], 00:17:48.149 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:17:48.149 | 99.99th=[ 7013] 00:17:48.149 bw ( KiB/s): min= 2052, max=356352, per=8.22%, avg=180224.57, stdev=145181.05, samples=7 00:17:48.149 iops : min= 2, max= 348, avg=176.00, stdev=141.78, samples=7 00:17:48.149 lat (msec) : 500=54.91%, 750=23.55%, 1000=0.27%, >=2000=21.27% 00:17:48.149 cpu : usr=0.02%, sys=1.25%, ctx=1001, majf=0, minf=32769 00:17:48.149 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:17:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.149 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.149 issued rwts: total=743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.149 job2: (groupid=0, jobs=1): err= 0: pid=3431197: Tue Oct 8 18:22:58 2024 00:17:48.149 read: IOPS=1, BW=1777KiB/s (1819kB/s)(22.0MiB/12679msec) 00:17:48.149 slat (msec): min=4, max=2098, avg=479.03, stdev=861.98 00:17:48.149 clat (msec): min=2139, max=12673, avg=7809.73, stdev=3260.85 00:17:48.150 lat (msec): min=4203, max=12678, avg=8288.76, stdev=3160.76 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:17:48.150 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8490], 00:17:48.150 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12550], 95.00th=[12550], 00:17:48.150 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.150 | 99.99th=[12684] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.17%, ctx=56, majf=0, minf=5633 00:17:48.150 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.150 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 
job2: (groupid=0, jobs=1): err= 0: pid=3431198: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=6, BW=6198KiB/s (6347kB/s)(77.0MiB/12721msec) 00:17:48.150 slat (usec): min=962, max=2055.3k, avg=137795.70, stdev=493827.48 00:17:48.150 clat (msec): min=2109, max=12715, avg=9715.56, stdev=3226.80 00:17:48.150 lat (msec): min=4152, max=12720, avg=9853.36, stdev=3122.60 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:17:48.150 | 30.00th=[ 8423], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12416], 00:17:48.150 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.150 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.150 | 99.99th=[12684] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.65%, ctx=72, majf=0, minf=19713 00:17:48.150 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.150 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431199: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=3, BW=3843KiB/s (3935kB/s)(48.0MiB/12790msec) 00:17:48.150 slat (usec): min=986, max=2113.4k, avg=221551.69, stdev=625584.87 00:17:48.150 clat (msec): min=2154, max=12788, avg=9801.56, stdev=3359.38 00:17:48.150 lat (msec): min=4223, max=12789, avg=10023.11, stdev=3190.80 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6477], 00:17:48.150 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[12550], 00:17:48.150 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:17:48.150 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.150 | 99.99th=[12818] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.41%, ctx=45, majf=0, minf=12289 00:17:48.150 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.150 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431200: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=6, BW=6789KiB/s (6952kB/s)(85.0MiB/12821msec) 00:17:48.150 slat (usec): min=917, max=2060.7k, avg=125494.63, stdev=473250.66 00:17:48.150 clat (msec): min=2153, max=12817, avg=10662.54, stdev=3089.24 00:17:48.150 lat (msec): min=4214, max=12820, avg=10788.03, stdev=2953.13 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6477], 00:17:48.150 | 30.00th=[10671], 40.00th=[12550], 50.00th=[12684], 60.00th=[12684], 00:17:48.150 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:17:48.150 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.150 | 99.99th=[12818] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.78%, ctx=79, majf=0, minf=21761 00:17:48.150 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 
00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.150 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431201: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=102, BW=103MiB/s (108MB/s)(1033MiB/10043msec) 00:17:48.150 slat (usec): min=37, max=2100.8k, avg=9683.38, stdev=108644.09 00:17:48.150 clat (msec): min=36, max=6960, avg=720.53, stdev=1362.48 00:17:48.150 lat (msec): min=66, max=6964, avg=730.22, stdev=1376.08 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 78], 5.00th=[ 211], 10.00th=[ 317], 20.00th=[ 330], 00:17:48.150 | 30.00th=[ 342], 40.00th=[ 372], 50.00th=[ 388], 60.00th=[ 405], 00:17:48.150 | 70.00th=[ 502], 80.00th=[ 575], 90.00th=[ 659], 95.00th=[ 776], 00:17:48.150 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:17:48.150 | 99.99th=[ 6946] 00:17:48.150 bw ( KiB/s): min=47009, max=391168, per=12.09%, avg=265056.14, stdev=127348.69, samples=7 00:17:48.150 iops : min= 45, max= 382, avg=258.71, stdev=124.62, samples=7 00:17:48.150 lat (msec) : 50=0.10%, 100=1.55%, 250=4.45%, 500=63.89%, 750=24.69% 00:17:48.150 lat (msec) : 1000=0.39%, >=2000=4.94% 00:17:48.150 cpu : usr=0.02%, sys=1.88%, ctx=1229, majf=0, minf=32769 00:17:48.150 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.150 issued rwts: total=1033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431202: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=5, BW=5432KiB/s (5563kB/s)(68.0MiB/12818msec) 00:17:48.150 slat (usec): min=1014, max=2070.3k, avg=156799.71, stdev=528279.67 00:17:48.150 clat (msec): min=2155, max=12815, avg=10317.15, stdev=3195.67 00:17:48.150 lat (msec): min=4222, max=12817, avg=10473.95, stdev=3047.35 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:48.150 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12550], 60.00th=[12684], 00:17:48.150 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:17:48.150 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.150 | 99.99th=[12818] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.63%, ctx=75, majf=0, minf=17409 00:17:48.150 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.150 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431203: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=1, BW=1288KiB/s (1319kB/s)(16.0MiB/12716msec) 00:17:48.150 slat (msec): min=10, max=2135, avg=659.23, stdev=972.76 00:17:48.150 clat (msec): min=2167, max=12705, avg=8412.13, stdev=3466.64 00:17:48.150 lat (msec): min=4273, max=12715, avg=9071.36, stdev=3192.06 00:17:48.150 clat percentiles 
(msec): 00:17:48.150 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329], 00:17:48.150 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8658], 00:17:48.150 | 70.00th=[10805], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.150 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.150 | 99.99th=[12684] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.01%, sys=0.15%, ctx=35, majf=0, minf=4097 00:17:48.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.150 job2: (groupid=0, jobs=1): err= 0: pid=3431204: Tue Oct 8 18:22:58 2024 00:17:48.150 read: IOPS=5, BW=5682KiB/s (5818kB/s)(71.0MiB/12796msec) 00:17:48.150 slat (usec): min=1024, max=2066.4k, avg=150185.02, stdev=514435.14 00:17:48.150 clat (msec): min=2131, max=12792, avg=10553.16, stdev=2959.02 00:17:48.150 lat (msec): min=4181, max=12795, avg=10703.34, stdev=2791.36 00:17:48.150 clat percentiles (msec): 00:17:48.150 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490], 00:17:48.150 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[12550], 60.00th=[12684], 00:17:48.150 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:17:48.150 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.150 | 99.99th=[12818] 00:17:48.150 lat (msec) : >=2000=100.00% 00:17:48.150 cpu : usr=0.00%, sys=0.66%, ctx=86, majf=0, minf=18177 00:17:48.150 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:17:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.150 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.151 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job2: (groupid=0, jobs=1): err= 0: pid=3431205: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=17, BW=17.7MiB/s (18.5MB/s)(225MiB/12747msec) 00:17:48.151 slat (usec): min=66, max=2133.8k, avg=47258.69, stdev=274910.03 00:17:48.151 clat (msec): min=1078, max=11598, avg=6740.41, stdev=4633.01 00:17:48.151 lat (msec): min=1084, max=11600, avg=6787.67, stdev=4628.28 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 1083], 5.00th=[ 1083], 10.00th=[ 1099], 20.00th=[ 1116], 00:17:48.151 | 30.00th=[ 1318], 40.00th=[ 3239], 50.00th=[ 8490], 60.00th=[10805], 00:17:48.151 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11476], 00:17:48.151 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:17:48.151 | 99.99th=[11610] 00:17:48.151 bw ( KiB/s): min= 2048, max=131072, per=1.14%, avg=25088.50, stdev=43427.09, samples=8 00:17:48.151 iops : min= 2, max= 128, avg=24.50, stdev=42.41, samples=8 00:17:48.151 lat (msec) : 2000=35.56%, >=2000=64.44% 00:17:48.151 cpu : usr=0.00%, sys=1.10%, ctx=339, majf=0, minf=32769 00:17:48.151 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:17:48.151 issued rwts: total=225,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431210: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=2, BW=2990KiB/s (3061kB/s)(37.0MiB/12673msec) 00:17:48.151 slat (usec): min=1023, max=2063.4k, avg=284828.23, stdev=691444.72 00:17:48.151 clat (msec): min=2134, max=12670, avg=9564.69, stdev=3400.16 00:17:48.151 lat (msec): min=4193, max=12672, avg=9849.51, stdev=3195.68 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6409], 00:17:48.151 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:17:48.151 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.151 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.151 | 99.99th=[12684] 00:17:48.151 lat (msec) : >=2000=100.00% 00:17:48.151 cpu : usr=0.00%, sys=0.31%, ctx=48, majf=0, minf=9473 00:17:48.151 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.151 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431211: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=72, BW=72.9MiB/s (76.4MB/s)(921MiB/12639msec) 00:17:48.151 slat (usec): min=51, max=2043.7k, avg=11413.37, stdev=116184.69 00:17:48.151 clat (msec): min=390, max=8968, avg=1671.28, stdev=2766.25 00:17:48.151 lat (msec): min=392, max=8971, avg=1682.69, stdev=2775.15 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 401], 5.00th=[ 409], 10.00th=[ 414], 20.00th=[ 422], 00:17:48.151 | 30.00th=[ 430], 40.00th=[ 439], 50.00th=[ 481], 60.00th=[ 709], 00:17:48.151 | 70.00th=[ 735], 80.00th=[ 776], 90.00th=[ 8658], 95.00th=[ 8792], 00:17:48.151 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:48.151 | 99.99th=[ 8926] 00:17:48.151 bw ( KiB/s): min= 2052, max=311296, per=6.74%, avg=147828.73, stdev=132768.79, samples=11 00:17:48.151 iops : min= 2, max= 304, avg=144.36, stdev=129.66, samples=11 00:17:48.151 lat (msec) : 500=51.68%, 750=20.85%, 1000=12.38%, >=2000=15.09% 00:17:48.151 cpu : usr=0.04%, sys=1.76%, ctx=738, majf=0, minf=32769 00:17:48.151 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.151 issued rwts: total=921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431212: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=3, BW=3552KiB/s (3637kB/s)(44.0MiB/12686msec) 00:17:48.151 slat (usec): min=1035, max=2068.8k, avg=239707.34, stdev=638014.99 00:17:48.151 clat (msec): min=2138, max=12682, avg=10037.97, stdev=3300.58 00:17:48.151 lat (msec): min=4206, max=12685, avg=10277.68, stdev=3089.76 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:17:48.151 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12550], 60.00th=[12550], 00:17:48.151 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 
95.00th=[12684], 00:17:48.151 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.151 | 99.99th=[12684] 00:17:48.151 lat (msec) : >=2000=100.00% 00:17:48.151 cpu : usr=0.00%, sys=0.36%, ctx=51, majf=0, minf=11265 00:17:48.151 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.151 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431213: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=4, BW=4449KiB/s (4556kB/s)(55.0MiB/12658msec) 00:17:48.151 slat (usec): min=684, max=2044.7k, avg=191452.96, stdev=580222.92 00:17:48.151 clat (msec): min=2127, max=12654, avg=8706.91, stdev=3237.84 00:17:48.151 lat (msec): min=4157, max=12657, avg=8898.36, stdev=3151.78 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:17:48.151 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:17:48.151 | 70.00th=[10671], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.151 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.151 | 99.99th=[12684] 00:17:48.151 lat (msec) : >=2000=100.00% 00:17:48.151 cpu : usr=0.00%, sys=0.32%, ctx=57, majf=0, minf=14081 00:17:48.151 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.151 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431214: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=30, BW=30.7MiB/s (32.1MB/s)(392MiB/12785msec) 00:17:48.151 slat (usec): min=54, max=2047.6k, avg=27148.48, stdev=196161.38 00:17:48.151 clat (msec): min=337, max=7795, avg=3970.08, stdev=2712.80 00:17:48.151 lat (msec): min=339, max=8479, avg=3997.22, stdev=2726.53 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 1083], 00:17:48.151 | 30.00th=[ 1183], 40.00th=[ 3708], 50.00th=[ 3809], 60.00th=[ 3943], 00:17:48.151 | 70.00th=[ 6342], 80.00th=[ 7550], 90.00th=[ 7617], 95.00th=[ 7684], 00:17:48.151 | 99.00th=[ 7752], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:48.151 | 99.99th=[ 7819] 00:17:48.151 bw ( KiB/s): min= 2052, max=292864, per=3.09%, avg=67840.50, stdev=99974.38, samples=8 00:17:48.151 iops : min= 2, max= 286, avg=66.25, stdev=97.63, samples=8 00:17:48.151 lat (msec) : 500=17.35%, 1000=0.77%, 2000=14.80%, >=2000=67.09% 00:17:48.151 cpu : usr=0.00%, sys=1.25%, ctx=384, majf=0, minf=32769 00:17:48.151 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:48.151 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431215: Tue Oct 8 18:22:58 2024 00:17:48.151 read: IOPS=78, BW=78.1MiB/s 
(81.9MB/s)(985MiB/12609msec) 00:17:48.151 slat (usec): min=39, max=2035.7k, avg=10648.15, stdev=106885.26 00:17:48.151 clat (msec): min=382, max=8431, avg=1528.22, stdev=1916.52 00:17:48.151 lat (msec): min=386, max=8433, avg=1538.87, stdev=1927.75 00:17:48.151 clat percentiles (msec): 00:17:48.151 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 418], 20.00th=[ 426], 00:17:48.151 | 30.00th=[ 435], 40.00th=[ 447], 50.00th=[ 523], 60.00th=[ 760], 00:17:48.151 | 70.00th=[ 844], 80.00th=[ 2635], 90.00th=[ 5940], 95.00th=[ 6141], 00:17:48.151 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 8423], 99.95th=[ 8423], 00:17:48.151 | 99.99th=[ 8423] 00:17:48.151 bw ( KiB/s): min= 2052, max=307200, per=7.29%, avg=159741.36, stdev=118907.44, samples=11 00:17:48.151 iops : min= 2, max= 300, avg=155.91, stdev=116.24, samples=11 00:17:48.151 lat (msec) : 500=49.34%, 750=10.25%, 1000=12.69%, 2000=2.23%, >=2000=25.48% 00:17:48.151 cpu : usr=0.00%, sys=1.55%, ctx=850, majf=0, minf=32769 00:17:48.151 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6% 00:17:48.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.151 issued rwts: total=985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.151 job3: (groupid=0, jobs=1): err= 0: pid=3431216: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=3, BW=3792KiB/s (3883kB/s)(47.0MiB/12693msec) 00:17:48.152 slat (usec): min=998, max=2044.4k, avg=224410.46, stdev=614200.57 00:17:48.152 clat (msec): min=2144, max=12689, avg=8906.61, stdev=3365.68 00:17:48.152 lat (msec): min=4186, max=12692, avg=9131.02, stdev=3254.83 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:17:48.152 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:17:48.152 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:17:48.152 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.152 | 99.99th=[12684] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.01%, sys=0.38%, ctx=51, majf=0, minf=12033 00:17:48.152 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.152 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431217: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=2, BW=2807KiB/s (2874kB/s)(35.0MiB/12770msec) 00:17:48.152 slat (usec): min=1008, max=2067.4k, avg=303879.52, stdev=716848.32 00:17:48.152 clat (msec): min=2133, max=12768, avg=10020.50, stdev=3463.65 00:17:48.152 lat (msec): min=4201, max=12769, avg=10324.38, stdev=3208.55 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:17:48.152 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:17:48.152 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:17:48.152 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.152 | 99.99th=[12818] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.00%, sys=0.31%, ctx=62, majf=0, 
minf=8961 00:17:48.152 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.152 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431218: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=182, BW=183MiB/s (191MB/s)(2336MiB/12798msec) 00:17:48.152 slat (usec): min=45, max=2170.5k, avg=4564.36, stdev=75090.92 00:17:48.152 clat (msec): min=119, max=8728, avg=678.32, stdev=1857.83 00:17:48.152 lat (msec): min=120, max=8729, avg=682.89, stdev=1864.77 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 122], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 128], 00:17:48.152 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 131], 60.00th=[ 132], 00:17:48.152 | 70.00th=[ 133], 80.00th=[ 575], 90.00th=[ 609], 95.00th=[ 6342], 00:17:48.152 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:48.152 | 99.99th=[ 8792] 00:17:48.152 bw ( KiB/s): min= 2048, max=1028096, per=17.20%, avg=376999.92, stdev=412984.08, samples=12 00:17:48.152 iops : min= 2, max= 1004, avg=368.08, stdev=403.38, samples=12 00:17:48.152 lat (msec) : 250=75.30%, 500=2.65%, 750=15.54%, 1000=0.30%, >=2000=6.21% 00:17:48.152 cpu : usr=0.09%, sys=2.52%, ctx=2135, majf=0, minf=32769 00:17:48.152 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.152 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431219: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=4, BW=4184KiB/s (4284kB/s)(52.0MiB/12728msec) 00:17:48.152 slat (usec): min=475, max=2052.0k, avg=203719.34, stdev=594550.21 00:17:48.152 clat (msec): min=2134, max=12726, avg=8787.43, stdev=3322.60 00:17:48.152 lat (msec): min=4183, max=12727, avg=8991.15, stdev=3230.14 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:17:48.152 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:17:48.152 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.152 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.152 | 99.99th=[12684] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.01%, sys=0.42%, ctx=51, majf=0, minf=13313 00:17:48.152 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.152 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431220: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=7, BW=7294KiB/s (7469kB/s)(91.0MiB/12775msec) 00:17:48.152 slat (usec): min=989, max=2044.3k, avg=116747.83, stdev=451246.68 00:17:48.152 clat (msec): min=2149, max=12770, avg=10232.34, stdev=3234.39 00:17:48.152 lat (msec): min=4167, 
max=12774, avg=10349.09, stdev=3129.44 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:17:48.152 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12550], 60.00th=[12550], 00:17:48.152 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:17:48.152 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.152 | 99.99th=[12818] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.00%, sys=0.81%, ctx=98, majf=0, minf=23297 00:17:48.152 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.8%, 16=17.6%, 32=35.2%, >=64=30.8% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.152 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431221: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=2, BW=2585KiB/s (2647kB/s)(32.0MiB/12677msec) 00:17:48.152 slat (usec): min=1024, max=2077.0k, avg=329115.34, stdev=738930.74 00:17:48.152 clat (msec): min=2144, max=12674, avg=10249.26, stdev=3271.48 00:17:48.152 lat (msec): min=4221, max=12676, avg=10578.37, stdev=2943.13 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:48.152 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12550], 60.00th=[12684], 00:17:48.152 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.152 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.152 | 99.99th=[12684] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.00%, sys=0.28%, ctx=45, majf=0, minf=8193 00:17:48.152 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.152 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job3: (groupid=0, jobs=1): err= 0: pid=3431222: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=99, BW=99.2MiB/s (104MB/s)(1263MiB/12729msec) 00:17:48.152 slat (usec): min=71, max=2083.5k, avg=8396.62, stdev=88307.89 00:17:48.152 clat (msec): min=274, max=8562, avg=1239.78, stdev=2089.48 00:17:48.152 lat (msec): min=276, max=8991, avg=1248.17, stdev=2097.37 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 279], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 313], 00:17:48.152 | 30.00th=[ 330], 40.00th=[ 430], 50.00th=[ 550], 60.00th=[ 693], 00:17:48.152 | 70.00th=[ 760], 80.00th=[ 835], 90.00th=[ 4144], 95.00th=[ 7550], 00:17:48.152 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 8557], 00:17:48.152 | 99.99th=[ 8557] 00:17:48.152 bw ( KiB/s): min= 2052, max=446464, per=7.58%, avg=166181.71, stdev=139164.91, samples=14 00:17:48.152 iops : min= 2, max= 436, avg=162.29, stdev=135.90, samples=14 00:17:48.152 lat (msec) : 500=45.92%, 750=22.25%, 1000=19.48%, 2000=1.74%, >=2000=10.61% 00:17:48.152 cpu : usr=0.05%, sys=1.74%, ctx=1119, majf=0, minf=32769 00:17:48.152 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 
4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.152 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job4: (groupid=0, jobs=1): err= 0: pid=3431236: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=3, BW=3854KiB/s (3947kB/s)(48.0MiB/12753msec) 00:17:48.152 slat (usec): min=1057, max=2057.3k, avg=221471.61, stdev=622131.82 00:17:48.152 clat (msec): min=2121, max=12750, avg=10072.43, stdev=3420.93 00:17:48.152 lat (msec): min=4161, max=12752, avg=10293.90, stdev=3234.29 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:17:48.152 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:17:48.152 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.152 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:17:48.152 | 99.99th=[12818] 00:17:48.152 lat (msec) : >=2000=100.00% 00:17:48.152 cpu : usr=0.00%, sys=0.44%, ctx=72, majf=0, minf=12289 00:17:48.152 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:17:48.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.152 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.152 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.152 job4: (groupid=0, jobs=1): err= 0: pid=3431237: Tue Oct 8 18:22:58 2024 00:17:48.152 read: IOPS=2, BW=2882KiB/s (2951kB/s)(30.0MiB/10660msec) 00:17:48.152 slat (usec): min=1075, max=2076.7k, avg=350973.51, stdev=755796.65 00:17:48.152 clat (msec): min=129, max=10656, avg=8198.30, stdev=3316.48 00:17:48.152 lat (msec): min=2154, max=10659, avg=8549.27, stdev=2972.49 00:17:48.152 clat percentiles (msec): 00:17:48.152 | 1.00th=[ 130], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:17:48.152 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[10537], 60.00th=[10671], 00:17:48.152 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:17:48.152 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:17:48.152 | 99.99th=[10671] 00:17:48.152 lat (msec) : 250=3.33%, >=2000=96.67% 00:17:48.152 cpu : usr=0.00%, sys=0.32%, ctx=65, majf=0, minf=7681 00:17:48.153 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.153 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431238: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=1, BW=1789KiB/s (1832kB/s)(22.0MiB/12589msec) 00:17:48.153 slat (usec): min=1256, max=2155.7k, avg=474808.59, stdev=850711.55 00:17:48.153 clat (msec): min=2142, max=12471, avg=7664.08, stdev=3058.80 00:17:48.153 lat (msec): min=4165, max=12588, avg=8138.89, stdev=2970.41 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:17:48.153 | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 8423], 60.00th=[ 8490], 00:17:48.153 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12416], 95.00th=[12416], 00:17:48.153 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 
00:17:48.153 | 99.99th=[12416] 00:17:48.153 lat (msec) : >=2000=100.00% 00:17:48.153 cpu : usr=0.00%, sys=0.17%, ctx=63, majf=0, minf=5633 00:17:48.153 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:48.153 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431240: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=4, BW=4936KiB/s (5054kB/s)(61.0MiB/12655msec) 00:17:48.153 slat (usec): min=948, max=2067.0k, avg=172700.14, stdev=547661.80 00:17:48.153 clat (msec): min=2119, max=12653, avg=9585.81, stdev=3035.78 00:17:48.153 lat (msec): min=4149, max=12654, avg=9758.51, stdev=2900.60 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:17:48.153 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:17:48.153 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.153 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.153 | 99.99th=[12684] 00:17:48.153 lat (msec) : >=2000=100.00% 00:17:48.153 cpu : usr=0.00%, sys=0.51%, ctx=60, majf=0, minf=15617 00:17:48.153 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.153 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431241: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=4, BW=4702KiB/s (4815kB/s)(58.0MiB/12631msec) 00:17:48.153 slat (usec): min=582, max=2048.2k, avg=181219.85, stdev=559664.85 00:17:48.153 clat (msec): min=2119, max=12625, avg=8861.37, stdev=3159.45 00:17:48.153 lat (msec): min=4167, max=12630, avg=9042.59, stdev=3066.03 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:17:48.153 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:17:48.153 | 70.00th=[12416], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:17:48.153 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.153 | 99.99th=[12684] 00:17:48.153 lat (msec) : >=2000=100.00% 00:17:48.153 cpu : usr=0.02%, sys=0.44%, ctx=56, majf=0, minf=14849 00:17:48.153 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.153 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431242: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=2, BW=3070KiB/s (3144kB/s)(38.0MiB/12674msec) 00:17:48.153 slat (usec): min=1043, max=2043.8k, avg=277692.07, stdev=689971.09 00:17:48.153 clat (msec): min=2121, max=12672, avg=9253.19, stdev=3438.94 00:17:48.153 lat (msec): min=4160, max=12673, avg=9530.89, stdev=3269.33 00:17:48.153 clat percentiles (msec): 
00:17:48.153 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:17:48.153 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:17:48.153 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:17:48.153 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:17:48.153 | 99.99th=[12684] 00:17:48.153 lat (msec) : >=2000=100.00% 00:17:48.153 cpu : usr=0.00%, sys=0.32%, ctx=57, majf=0, minf=9729 00:17:48.153 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.153 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431243: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=4, BW=4465KiB/s (4572kB/s)(55.0MiB/12613msec) 00:17:48.153 slat (usec): min=942, max=2072.9k, avg=190358.94, stdev=579797.69 00:17:48.153 clat (msec): min=2142, max=10636, avg=6855.68, stdev=1502.91 00:17:48.153 lat (msec): min=4194, max=12612, avg=7046.04, stdev=1557.03 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 2140], 5.00th=[ 4279], 10.00th=[ 6208], 20.00th=[ 6208], 00:17:48.153 | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 6342], 60.00th=[ 6342], 00:17:48.153 | 70.00th=[ 6409], 80.00th=[ 8490], 90.00th=[ 8490], 95.00th=[10537], 00:17:48.153 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:17:48.153 | 99.99th=[10671] 00:17:48.153 lat (msec) : >=2000=100.00% 00:17:48.153 cpu : usr=0.00%, sys=0.44%, ctx=63, majf=0, minf=14081 00:17:48.153 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.153 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431244: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=1, BW=1260KiB/s (1290kB/s)(13.0MiB/10566msec) 00:17:48.153 slat (msec): min=13, max=2093, avg=804.11, stdev=995.05 00:17:48.153 clat (msec): min=111, max=10459, avg=5590.62, stdev=3543.65 00:17:48.153 lat (msec): min=2141, max=10564, avg=6394.73, stdev=3378.93 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 112], 5.00th=[ 112], 10.00th=[ 2140], 20.00th=[ 2198], 00:17:48.153 | 30.00th=[ 2232], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 6477], 00:17:48.153 | 70.00th=[ 8658], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:17:48.153 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:48.153 | 99.99th=[10402] 00:17:48.153 lat (msec) : 250=7.69%, >=2000=92.31% 00:17:48.153 cpu : usr=0.00%, sys=0.14%, ctx=62, majf=0, minf=3329 00:17:48.153 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431245: Tue Oct 8 18:22:58 2024 00:17:48.153 
read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(435MiB/12581msec) 00:17:48.153 slat (usec): min=79, max=2107.7k, avg=24034.47, stdev=195142.85 00:17:48.153 clat (msec): min=573, max=11064, avg=3570.68, stdev=4211.40 00:17:48.153 lat (msec): min=574, max=11064, avg=3594.71, stdev=4224.14 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 575], 5.00th=[ 575], 10.00th=[ 575], 20.00th=[ 584], 00:17:48.153 | 30.00th=[ 584], 40.00th=[ 600], 50.00th=[ 617], 60.00th=[ 676], 00:17:48.153 | 70.00th=[ 4799], 80.00th=[10671], 90.00th=[10939], 95.00th=[10939], 00:17:48.153 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:17:48.153 | 99.99th=[11073] 00:17:48.153 bw ( KiB/s): min= 2052, max=219136, per=4.11%, avg=90010.14, stdev=90077.24, samples=7 00:17:48.153 iops : min= 2, max= 214, avg=87.57, stdev=87.89, samples=7 00:17:48.153 lat (msec) : 750=61.84%, >=2000=38.16% 00:17:48.153 cpu : usr=0.05%, sys=1.43%, ctx=336, majf=0, minf=32769 00:17:48.153 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:48.153 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431246: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=181, BW=182MiB/s (191MB/s)(1945MiB/10696msec) 00:17:48.153 slat (usec): min=43, max=2037.5k, avg=5428.66, stdev=88963.38 00:17:48.153 clat (msec): min=121, max=4556, avg=498.57, stdev=1170.94 00:17:48.153 lat (msec): min=121, max=4557, avg=504.00, stdev=1177.56 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 126], 00:17:48.153 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 134], 60.00th=[ 136], 00:17:48.153 | 70.00th=[ 138], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 4463], 00:17:48.153 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:17:48.153 | 99.99th=[ 4530] 00:17:48.153 bw ( KiB/s): min=10240, max=1009664, per=24.25%, avg=531602.29, stdev=462486.68, samples=7 00:17:48.153 iops : min= 10, max= 986, avg=519.14, stdev=451.65, samples=7 00:17:48.153 lat (msec) : 250=90.44%, 500=0.36%, >=2000=9.20% 00:17:48.153 cpu : usr=0.06%, sys=2.07%, ctx=1852, majf=0, minf=32769 00:17:48.153 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:17:48.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.153 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.153 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.153 job4: (groupid=0, jobs=1): err= 0: pid=3431247: Tue Oct 8 18:22:58 2024 00:17:48.153 read: IOPS=4, BW=4635KiB/s (4746kB/s)(57.0MiB/12593msec) 00:17:48.153 slat (usec): min=894, max=2035.6k, avg=183269.36, stdev=557991.34 00:17:48.153 clat (msec): min=2145, max=12586, avg=8822.18, stdev=3256.04 00:17:48.153 lat (msec): min=4154, max=12592, avg=9005.45, stdev=3166.30 00:17:48.153 clat percentiles (msec): 00:17:48.153 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:17:48.153 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:17:48.153 | 70.00th=[10671], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:17:48.154 | 99.00th=[12550], 99.50th=[12550], 
99.90th=[12550], 99.95th=[12550], 00:17:48.154 | 99.99th=[12550] 00:17:48.154 lat (msec) : >=2000=100.00% 00:17:48.154 cpu : usr=0.00%, sys=0.46%, ctx=45, majf=0, minf=14593 00:17:48.154 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.154 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job4: (groupid=0, jobs=1): err= 0: pid=3431248: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=46, BW=46.2MiB/s (48.4MB/s)(582MiB/12596msec) 00:17:48.154 slat (usec): min=50, max=2065.4k, avg=17955.36, stdev=162379.26 00:17:48.154 clat (msec): min=250, max=10723, avg=2686.27, stdev=2883.97 00:17:48.154 lat (msec): min=252, max=10726, avg=2704.23, stdev=2901.09 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 264], 5.00th=[ 279], 10.00th=[ 338], 20.00th=[ 460], 00:17:48.154 | 30.00th=[ 642], 40.00th=[ 802], 50.00th=[ 877], 60.00th=[ 2567], 00:17:48.154 | 70.00th=[ 2735], 80.00th=[ 6275], 90.00th=[ 8221], 95.00th=[ 8288], 00:17:48.154 | 99.00th=[ 8490], 99.50th=[10537], 99.90th=[10671], 99.95th=[10671], 00:17:48.154 | 99.99th=[10671] 00:17:48.154 bw ( KiB/s): min= 2048, max=311296, per=4.72%, avg=103535.89, stdev=110095.39, samples=9 00:17:48.154 iops : min= 2, max= 304, avg=101.00, stdev=107.62, samples=9 00:17:48.154 lat (msec) : 500=22.51%, 750=14.26%, 1000=14.78%, 2000=2.58%, >=2000=45.88% 00:17:48.154 cpu : usr=0.02%, sys=1.31%, ctx=451, majf=0, minf=32770 00:17:48.154 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.154 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job4: (groupid=0, jobs=1): err= 0: pid=3431249: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=4, BW=4241KiB/s (4343kB/s)(44.0MiB/10624msec) 00:17:48.154 slat (usec): min=1044, max=2049.2k, avg=238468.20, stdev=635447.18 00:17:48.154 clat (msec): min=130, max=10622, avg=7871.58, stdev=3406.53 00:17:48.154 lat (msec): min=2153, max=10623, avg=8110.05, stdev=3213.84 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 131], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:17:48.154 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10402], 60.00th=[10402], 00:17:48.154 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:17:48.154 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:17:48.154 | 99.99th=[10671] 00:17:48.154 lat (msec) : 250=2.27%, >=2000=97.73% 00:17:48.154 cpu : usr=0.00%, sys=0.45%, ctx=71, majf=0, minf=11265 00:17:48.154 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:48.154 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431256: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=6, BW=6974KiB/s (7141kB/s)(72.0MiB/10572msec) 00:17:48.154 
slat (usec): min=777, max=2046.8k, avg=144933.05, stdev=502239.18 00:17:48.154 clat (msec): min=136, max=10563, avg=6453.59, stdev=3406.45 00:17:48.154 lat (msec): min=2118, max=10571, avg=6598.52, stdev=3355.49 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 136], 5.00th=[ 2123], 10.00th=[ 2265], 20.00th=[ 2265], 00:17:48.154 | 30.00th=[ 2265], 40.00th=[ 6409], 50.00th=[ 6544], 60.00th=[ 8658], 00:17:48.154 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:17:48.154 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:17:48.154 | 99.99th=[10537] 00:17:48.154 lat (msec) : 250=1.39%, >=2000=98.61% 00:17:48.154 cpu : usr=0.02%, sys=0.67%, ctx=51, majf=0, minf=18433 00:17:48.154 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.154 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431258: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=175, BW=176MiB/s (184MB/s)(1761MiB/10010msec) 00:17:48.154 slat (usec): min=36, max=2061.3k, avg=5673.89, stdev=75466.55 00:17:48.154 clat (msec): min=8, max=6207, avg=463.59, stdev=724.13 00:17:48.154 lat (msec): min=9, max=6225, avg=469.26, stdev=737.37 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 23], 5.00th=[ 111], 10.00th=[ 125], 20.00th=[ 140], 00:17:48.154 | 30.00th=[ 167], 40.00th=[ 236], 50.00th=[ 262], 60.00th=[ 275], 00:17:48.154 | 70.00th=[ 334], 80.00th=[ 477], 90.00th=[ 726], 95.00th=[ 2022], 00:17:48.154 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:48.154 | 99.99th=[ 6208] 00:17:48.154 bw ( KiB/s): min=26624, max=882970, per=16.00%, avg=350755.25, stdev=267726.89, samples=8 00:17:48.154 iops : min= 26, max= 862, avg=342.50, stdev=261.37, samples=8 00:17:48.154 lat (msec) : 10=0.11%, 20=0.74%, 50=2.04%, 100=1.42%, 250=37.71% 00:17:48.154 lat (msec) : 500=38.78%, 750=9.82%, 1000=1.42%, 2000=2.84%, >=2000=5.11% 00:17:48.154 cpu : usr=0.09%, sys=2.53%, ctx=1926, majf=0, minf=32769 00:17:48.154 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.154 issued rwts: total=1761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431259: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=7, BW=7719KiB/s (7905kB/s)(81.0MiB/10745msec) 00:17:48.154 slat (usec): min=986, max=2053.8k, avg=131061.72, stdev=478940.46 00:17:48.154 clat (msec): min=128, max=10743, avg=7640.91, stdev=3453.28 00:17:48.154 lat (msec): min=2132, max=10744, avg=7771.97, stdev=3364.92 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 129], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4329], 00:17:48.154 | 30.00th=[ 6409], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10671], 00:17:48.154 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:17:48.154 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:17:48.154 | 99.99th=[10805] 00:17:48.154 lat (msec) : 250=1.23%, >=2000=98.77% 00:17:48.154 cpu : 
usr=0.00%, sys=0.87%, ctx=82, majf=0, minf=20737 00:17:48.154 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.154 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431260: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=11, BW=11.3MiB/s (11.8MB/s)(121MiB/10754msec) 00:17:48.154 slat (usec): min=803, max=2029.8k, avg=87791.46, stdev=390092.10 00:17:48.154 clat (msec): min=130, max=10751, avg=7230.11, stdev=3611.91 00:17:48.154 lat (msec): min=2095, max=10753, avg=7317.91, stdev=3566.72 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 2089], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2265], 00:17:48.154 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10537], 00:17:48.154 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:17:48.154 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:17:48.154 | 99.99th=[10805] 00:17:48.154 lat (msec) : 250=0.83%, >=2000=99.17% 00:17:48.154 cpu : usr=0.00%, sys=1.20%, ctx=114, majf=0, minf=30977 00:17:48.154 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:48.154 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431261: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=62, BW=62.4MiB/s (65.5MB/s)(671MiB/10746msec) 00:17:48.154 slat (usec): min=65, max=2150.4k, avg=15809.55, stdev=145412.25 00:17:48.154 clat (msec): min=135, max=4823, avg=1729.19, stdev=1612.74 00:17:48.154 lat (msec): min=411, max=4826, avg=1745.00, stdev=1617.67 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 409], 5.00th=[ 422], 10.00th=[ 435], 20.00th=[ 468], 00:17:48.154 | 30.00th=[ 489], 40.00th=[ 498], 50.00th=[ 527], 60.00th=[ 2232], 00:17:48.154 | 70.00th=[ 2333], 80.00th=[ 2869], 90.00th=[ 4597], 95.00th=[ 4732], 00:17:48.154 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:17:48.154 | 99.99th=[ 4799] 00:17:48.154 bw ( KiB/s): min= 2048, max=294912, per=5.63%, avg=123519.44, stdev=117882.41, samples=9 00:17:48.154 iops : min= 2, max= 288, avg=120.56, stdev=115.07, samples=9 00:17:48.154 lat (msec) : 250=0.15%, 500=41.28%, 750=15.20%, 1000=0.30%, >=2000=43.07% 00:17:48.154 cpu : usr=0.05%, sys=1.60%, ctx=1968, majf=0, minf=32769 00:17:48.154 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.154 issued rwts: total=671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.154 job5: (groupid=0, jobs=1): err= 0: pid=3431262: Tue Oct 8 18:22:58 2024 00:17:48.154 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(600MiB/10747msec) 00:17:48.154 slat (usec): min=70, max=2038.6k, avg=17676.32, stdev=161262.44 00:17:48.154 clat (msec): min=137, max=6785, avg=1344.23, 
stdev=1678.54 00:17:48.154 lat (msec): min=279, max=6787, avg=1361.91, stdev=1692.43 00:17:48.154 clat percentiles (msec): 00:17:48.154 | 1.00th=[ 279], 5.00th=[ 288], 10.00th=[ 300], 20.00th=[ 372], 00:17:48.154 | 30.00th=[ 418], 40.00th=[ 451], 50.00th=[ 527], 60.00th=[ 600], 00:17:48.154 | 70.00th=[ 667], 80.00th=[ 2534], 90.00th=[ 2769], 95.00th=[ 6745], 00:17:48.154 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:17:48.154 | 99.99th=[ 6812] 00:17:48.154 bw ( KiB/s): min=10240, max=323584, per=8.82%, avg=193331.20, stdev=113858.49, samples=5 00:17:48.154 iops : min= 10, max= 316, avg=188.80, stdev=111.19, samples=5 00:17:48.154 lat (msec) : 250=0.17%, 500=47.17%, 750=22.67%, 1000=0.83%, >=2000=29.17% 00:17:48.154 cpu : usr=0.04%, sys=1.55%, ctx=911, majf=0, minf=32769 00:17:48.154 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5% 00:17:48.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.155 issued rwts: total=600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431263: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=50, BW=50.4MiB/s (52.8MB/s)(532MiB/10565msec) 00:17:48.155 slat (usec): min=444, max=2114.1k, avg=19600.09, stdev=171010.56 00:17:48.155 clat (msec): min=134, max=4820, avg=1551.85, stdev=1761.24 00:17:48.155 lat (msec): min=433, max=4823, avg=1571.45, stdev=1768.65 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 435], 5.00th=[ 447], 10.00th=[ 460], 20.00th=[ 485], 00:17:48.155 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 502], 60.00th=[ 527], 00:17:48.155 | 70.00th=[ 558], 80.00th=[ 4463], 90.00th=[ 4665], 95.00th=[ 4732], 00:17:48.155 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:17:48.155 | 99.99th=[ 4799] 00:17:48.155 bw ( KiB/s): min=12263, max=284672, per=7.54%, avg=165400.60, stdev=116555.02, samples=5 00:17:48.155 iops : min= 11, max= 278, avg=161.20, stdev=114.12, samples=5 00:17:48.155 lat (msec) : 250=0.19%, 500=47.56%, 750=24.81%, >=2000=27.44% 00:17:48.155 cpu : usr=0.03%, sys=1.01%, ctx=1860, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:48.155 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431264: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=91, BW=91.2MiB/s (95.7MB/s)(972MiB/10653msec) 00:17:48.155 slat (usec): min=53, max=2116.9k, avg=10815.92, stdev=111236.17 00:17:48.155 clat (msec): min=134, max=4816, avg=1340.14, stdev=1423.22 00:17:48.155 lat (msec): min=423, max=4820, avg=1350.95, stdev=1426.16 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 422], 5.00th=[ 443], 10.00th=[ 468], 20.00th=[ 485], 00:17:48.155 | 30.00th=[ 502], 40.00th=[ 575], 50.00th=[ 584], 60.00th=[ 600], 00:17:48.155 | 70.00th=[ 667], 80.00th=[ 2601], 90.00th=[ 4530], 95.00th=[ 4665], 00:17:48.155 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:17:48.155 | 99.99th=[ 4799] 00:17:48.155 bw ( KiB/s): min=14336, max=286720, per=7.88%, avg=172826.20, 
stdev=87780.76, samples=10 00:17:48.155 iops : min= 14, max= 280, avg=168.70, stdev=85.77, samples=10 00:17:48.155 lat (msec) : 250=0.10%, 500=29.73%, 750=43.11%, 1000=0.21%, >=2000=26.85% 00:17:48.155 cpu : usr=0.04%, sys=2.02%, ctx=2241, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.155 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431265: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=150, BW=151MiB/s (158MB/s)(1510MiB/10012msec) 00:17:48.155 slat (usec): min=42, max=2014.6k, avg=6620.52, stdev=100329.32 00:17:48.155 clat (msec): min=10, max=8592, avg=423.15, stdev=1378.00 00:17:48.155 lat (msec): min=11, max=8595, avg=429.77, stdev=1393.92 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 23], 5.00th=[ 79], 10.00th=[ 118], 20.00th=[ 122], 00:17:48.155 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 123], 60.00th=[ 124], 00:17:48.155 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 205], 95.00th=[ 2265], 00:17:48.155 | 99.00th=[ 8423], 99.50th=[ 8557], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:48.155 | 99.99th=[ 8658] 00:17:48.155 bw ( KiB/s): min=714132, max=1071104, per=40.72%, avg=892618.00, stdev=252417.32, samples=2 00:17:48.155 iops : min= 697, max= 1046, avg=871.50, stdev=246.78, samples=2 00:17:48.155 lat (msec) : 20=0.79%, 50=2.52%, 100=3.25%, 250=87.75%, 500=0.26% 00:17:48.155 lat (msec) : >=2000=5.43% 00:17:48.155 cpu : usr=0.03%, sys=2.05%, ctx=1400, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.155 issued rwts: total=1510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431266: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=15, BW=16.0MiB/s (16.8MB/s)(171MiB/10689msec) 00:17:48.155 slat (usec): min=58, max=2035.1k, avg=61727.39, stdev=323859.53 00:17:48.155 clat (msec): min=132, max=8593, avg=4967.45, stdev=1466.51 00:17:48.155 lat (msec): min=2167, max=8602, avg=5029.18, stdev=1463.04 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 2165], 5.00th=[ 2265], 10.00th=[ 4077], 20.00th=[ 4111], 00:17:48.155 | 30.00th=[ 4144], 40.00th=[ 4212], 50.00th=[ 4245], 60.00th=[ 4396], 00:17:48.155 | 70.00th=[ 6409], 80.00th=[ 6477], 90.00th=[ 6544], 95.00th=[ 6544], 00:17:48.155 | 99.00th=[ 8557], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:48.155 | 99.99th=[ 8658] 00:17:48.155 bw ( KiB/s): min= 2043, max=65405, per=1.34%, avg=29309.33, stdev=32590.70, samples=3 00:17:48.155 iops : min= 1, max= 63, avg=28.00, stdev=31.76, samples=3 00:17:48.155 lat (msec) : 250=0.58%, >=2000=99.42% 00:17:48.155 cpu : usr=0.01%, sys=1.28%, ctx=139, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.4%, 32=18.7%, >=64=63.2% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:17:48.155 issued rwts: total=171,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431267: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=171, BW=171MiB/s (180MB/s)(1715MiB/10017msec) 00:17:48.155 slat (usec): min=43, max=2054.8k, avg=5826.67, stdev=83056.11 00:17:48.155 clat (msec): min=15, max=4670, avg=494.31, stdev=847.47 00:17:48.155 lat (msec): min=17, max=4675, avg=500.14, stdev=855.58 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 69], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 124], 00:17:48.155 | 30.00th=[ 125], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 222], 00:17:48.155 | 70.00th=[ 372], 80.00th=[ 489], 90.00th=[ 709], 95.00th=[ 2668], 00:17:48.155 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:17:48.155 | 99.99th=[ 4665] 00:17:48.155 bw ( KiB/s): min=30720, max=1030144, per=18.54%, avg=406528.00, stdev=338255.85, samples=8 00:17:48.155 iops : min= 30, max= 1006, avg=397.00, stdev=330.33, samples=8 00:17:48.155 lat (msec) : 20=0.17%, 50=0.70%, 100=0.93%, 250=60.06%, 500=18.37% 00:17:48.155 lat (msec) : 750=10.03%, 1000=0.23%, >=2000=9.50% 00:17:48.155 cpu : usr=0.06%, sys=2.50%, ctx=1552, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.155 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431268: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=83, BW=83.7MiB/s (87.8MB/s)(880MiB/10509msec) 00:17:48.155 slat (usec): min=44, max=2004.5k, avg=11886.07, stdev=130964.56 00:17:48.155 clat (msec): min=45, max=4440, avg=980.68, stdev=1365.53 00:17:48.155 lat (msec): min=130, max=4440, avg=992.56, stdev=1373.64 00:17:48.155 clat percentiles (msec): 00:17:48.155 | 1.00th=[ 130], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 148], 00:17:48.155 | 30.00th=[ 165], 40.00th=[ 309], 50.00th=[ 456], 60.00th=[ 523], 00:17:48.155 | 70.00th=[ 558], 80.00th=[ 1972], 90.00th=[ 4329], 95.00th=[ 4396], 00:17:48.155 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:17:48.155 | 99.99th=[ 4463] 00:17:48.155 bw ( KiB/s): min=10240, max=587776, per=10.03%, avg=219852.29, stdev=194965.94, samples=7 00:17:48.155 iops : min= 10, max= 574, avg=214.43, stdev=190.51, samples=7 00:17:48.155 lat (msec) : 50=0.11%, 250=35.00%, 500=17.61%, 750=26.82%, 2000=0.57% 00:17:48.155 lat (msec) : >=2000=19.89% 00:17:48.155 cpu : usr=0.05%, sys=1.28%, ctx=1861, majf=0, minf=32769 00:17:48.155 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:17:48.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.155 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.155 job5: (groupid=0, jobs=1): err= 0: pid=3431269: Tue Oct 8 18:22:58 2024 00:17:48.155 read: IOPS=96, BW=96.3MiB/s (101MB/s)(966MiB/10036msec) 00:17:48.155 slat (usec): min=47, max=2045.0k, avg=10350.60, stdev=112123.68 00:17:48.155 clat (msec): min=33, max=6988, avg=1264.86, stdev=2060.26 00:17:48.155 lat (msec): min=39, max=6989, avg=1275.21, 
stdev=2068.49 00:17:48.156 clat percentiles (msec): 00:17:48.156 | 1.00th=[ 53], 5.00th=[ 142], 10.00th=[ 243], 20.00th=[ 264], 00:17:48.156 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 355], 60.00th=[ 435], 00:17:48.156 | 70.00th=[ 743], 80.00th=[ 835], 90.00th=[ 4933], 95.00th=[ 6879], 00:17:48.156 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:17:48.156 | 99.99th=[ 7013] 00:17:48.156 bw ( KiB/s): min=16384, max=475136, per=7.84%, avg=171827.20, stdev=174074.51, samples=10 00:17:48.156 iops : min= 16, max= 464, avg=167.80, stdev=169.99, samples=10 00:17:48.156 lat (msec) : 50=0.83%, 100=2.48%, 250=7.35%, 500=50.72%, 750=10.66% 00:17:48.156 lat (msec) : 1000=11.49%, >=2000=16.46% 00:17:48.156 cpu : usr=0.01%, sys=2.11%, ctx=905, majf=0, minf=32769 00:17:48.156 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:17:48.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.156 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.156 issued rwts: total=966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.156 00:17:48.156 Run status group 0 (all jobs): 00:17:48.156 READ: bw=2141MiB/s (2245MB/s), 278KiB/s-183MiB/s (284kB/s-191MB/s), io=31.2GiB (33.5GB), run=10010-14919msec 00:17:48.156 00:17:48.156 Disk stats (read/write): 00:17:48.156 nvme0n1: ios=42526/0, merge=0/0, ticks=11451064/0, in_queue=11451064, util=98.39% 00:17:48.156 nvme1n1: ios=33882/0, merge=0/0, ticks=14298842/0, in_queue=14298842, util=99.02% 00:17:48.156 nvme2n1: ios=20611/0, merge=0/0, ticks=11381492/0, in_queue=11381492, util=98.98% 00:17:48.156 nvme3n1: ios=50170/0, merge=0/0, ticks=13993510/0, in_queue=13993510, util=98.99% 00:17:48.156 nvme4n1: ios=26952/0, merge=0/0, ticks=10591395/0, in_queue=10591395, util=98.71% 00:17:48.156 nvme5n1: ios=79758/0, merge=0/0, ticks=10277864/0, in_queue=10277864, util=99.09% 00:17:48.156 18:22:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:17:48.156 18:22:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:17:48.156 18:22:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:48.156 18:22:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:17:48.156 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:48.156 18:22:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:48.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:48.156 18:23:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:49.093 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:17:49.093 18:23:01 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:49.093 18:23:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:50.031 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:50.031 18:23:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:51.061 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:51.061 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:51.061 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:51.061 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:51.061 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 
00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:51.062 18:23:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:52.019 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.019 18:23:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:52.019 rmmod nvme_rdma 00:17:52.019 rmmod nvme_fabrics 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.019 18:23:05 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@515 -- # '[' -n 3429905 ']' 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # killprocess 3429905 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 3429905 ']' 00:17:52.019 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 3429905 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3429905 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3429905' 00:17:52.020 killing process with pid 3429905 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 3429905 00:17:52.020 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 3429905 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:52.588 00:17:52.588 real 0m37.177s 00:17:52.588 user 2m3.035s 00:17:52.588 sys 0m16.220s 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:52.588 ************************************ 00:17:52.588 END TEST nvmf_srq_overwhelm 00:17:52.588 ************************************ 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.588 ************************************ 00:17:52.588 START TEST nvmf_shutdown 00:17:52.588 ************************************ 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:52.588 * Looking for test storage... 
00:17:52.588 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.588 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.848 --rc genhtml_branch_coverage=1 00:17:52.848 --rc genhtml_function_coverage=1 00:17:52.848 --rc genhtml_legend=1 00:17:52.848 --rc geninfo_all_blocks=1 00:17:52.848 --rc geninfo_unexecuted_blocks=1 00:17:52.848 00:17:52.848 ' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.848 --rc genhtml_branch_coverage=1 00:17:52.848 --rc genhtml_function_coverage=1 00:17:52.848 --rc genhtml_legend=1 00:17:52.848 --rc geninfo_all_blocks=1 00:17:52.848 --rc geninfo_unexecuted_blocks=1 00:17:52.848 00:17:52.848 ' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.848 --rc genhtml_branch_coverage=1 00:17:52.848 --rc genhtml_function_coverage=1 00:17:52.848 --rc genhtml_legend=1 00:17:52.848 --rc geninfo_all_blocks=1 00:17:52.848 --rc geninfo_unexecuted_blocks=1 00:17:52.848 00:17:52.848 ' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.848 --rc genhtml_branch_coverage=1 00:17:52.848 --rc genhtml_function_coverage=1 00:17:52.848 --rc genhtml_legend=1 00:17:52.848 --rc geninfo_all_blocks=1 00:17:52.848 --rc geninfo_unexecuted_blocks=1 00:17:52.848 00:17:52.848 ' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.848 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.849 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:52.849 18:23:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:52.849 ************************************ 00:17:52.849 START TEST nvmf_shutdown_tc1 00:17:52.849 ************************************ 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.849 18:23:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.420 18:23:12 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:17:59.420 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:59.421 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:59.421 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:17:59.421 Found net devices under 0000:18:00.0: mlx_0_0 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:59.421 Found net devices under 0000:18:00.1: mlx_0_1 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # rdma_device_init 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.421 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.680 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:59.681 18:23:12 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:59.681 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.681 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:17:59.681 altname enp24s0f0np0 00:17:59.681 altname ens785f0np0 00:17:59.681 inet 192.168.100.8/24 scope global mlx_0_0 00:17:59.681 valid_lft forever preferred_lft forever 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:59.681 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.681 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:17:59.681 altname enp24s0f1np1 00:17:59.681 altname ens785f1np1 00:17:59.681 inet 192.168.100.9/24 scope global mlx_0_1 00:17:59.681 valid_lft forever preferred_lft forever 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.681 
18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:59.681 192.168.100.9' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:59.681 192.168.100.9' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # head -n 1 00:17:59.681 18:23:12 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:59.681 192.168.100.9' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # tail -n +2 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # head -n 1 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:59.681 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3437027 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3437027 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3437027 ']' 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.682 18:23:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.682 [2024-10-08 18:23:12.835728] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:17:59.682 [2024-10-08 18:23:12.835790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.941 [2024-10-08 18:23:12.906303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.941 [2024-10-08 18:23:12.993112] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.941 [2024-10-08 18:23:12.993151] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.942 [2024-10-08 18:23:12.993161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.942 [2024-10-08 18:23:12.993169] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.942 [2024-10-08 18:23:12.993176] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.942 [2024-10-08 18:23:12.994617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.942 [2024-10-08 18:23:12.994650] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.942 [2024-10-08 18:23:12.994683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.942 [2024-10-08 18:23:12.994684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:00.881 [2024-10-08 18:23:13.775996] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x161c5e0/0x1620ad0) succeed. 00:18:00.881 [2024-10-08 18:23:13.786626] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x161dc20/0x1662170) succeed. 
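By this point the trace has started the NVMe-oF target (nvmf_tgt with core mask 0x1E, hence the four reactors on cores 1-4) and created the RDMA transport; the two create_ib_device notices correspond to the two mlx5 ports discovered earlier. A minimal sketch of the equivalent manual sequence, again assuming the stock scripts/rpc.py client behind rpc_cmd (the socket-polling wait is a simplification of the waitforlisten helper):

  # hedged sketch: start the target and create the RDMA transport by hand
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk             # tree location from the trace
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &            # flags as used by nvmfappstart
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done          # crude stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192                          # transport options from the trace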
00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.881 18:23:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:00.881 Malloc1 00:18:00.881 [2024-10-08 18:23:14.025921] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:00.881 Malloc2 00:18:01.140 Malloc3 00:18:01.140 Malloc4 00:18:01.140 Malloc5 00:18:01.140 Malloc6 00:18:01.140 Malloc7 00:18:01.399 Malloc8 00:18:01.399 Malloc9 00:18:01.399 Malloc10 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3437267 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3437267 /var/tmp/bdevperf.sock 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3437267 ']' 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.399 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 [2024-10-08 18:23:14.526609] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:18:01.400 [2024-10-08 18:23:14.526669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.400 "params": { 00:18:01.400 "name": "Nvme$subsystem", 00:18:01.400 "trtype": "$TEST_TRANSPORT", 00:18:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.400 "adrfam": "ipv4", 00:18:01.400 "trsvcid": "$NVMF_PORT", 00:18:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.400 "hdgst": ${hdgst:-false}, 00:18:01.400 "ddgst": ${ddgst:-false} 00:18:01.400 }, 00:18:01.400 "method": "bdev_nvme_attach_controller" 00:18:01.400 } 00:18:01.400 EOF 00:18:01.400 )") 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:01.400 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:01.400 { 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme$subsystem", 
00:18:01.401 "trtype": "$TEST_TRANSPORT", 00:18:01.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "$NVMF_PORT", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.401 "hdgst": ${hdgst:-false}, 00:18:01.401 "ddgst": ${ddgst:-false} 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 } 00:18:01.401 EOF 00:18:01.401 )") 00:18:01.401 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:01.401 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:18:01.401 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:18:01.401 18:23:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme1", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme2", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme3", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme4", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme5", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme6", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": 
"bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme7", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme8", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme9", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 },{ 00:18:01.401 "params": { 00:18:01.401 "name": "Nvme10", 00:18:01.401 "trtype": "rdma", 00:18:01.401 "traddr": "192.168.100.8", 00:18:01.401 "adrfam": "ipv4", 00:18:01.401 "trsvcid": "4420", 00:18:01.401 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:01.401 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:01.401 "hdgst": false, 00:18:01.401 "ddgst": false 00:18:01.401 }, 00:18:01.401 "method": "bdev_nvme_attach_controller" 00:18:01.401 }' 00:18:01.660 [2024-10-08 18:23:14.615831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.660 [2024-10-08 18:23:14.697531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3437267 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:18:02.599 18:23:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:18:03.537 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3437267 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@89 -- # kill -0 3437027 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.537 { 00:18:03.537 "params": { 00:18:03.537 "name": "Nvme$subsystem", 00:18:03.537 "trtype": "$TEST_TRANSPORT", 00:18:03.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.537 "adrfam": "ipv4", 00:18:03.537 "trsvcid": "$NVMF_PORT", 00:18:03.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.537 "hdgst": ${hdgst:-false}, 00:18:03.537 "ddgst": ${ddgst:-false} 00:18:03.537 }, 00:18:03.537 "method": "bdev_nvme_attach_controller" 00:18:03.537 } 00:18:03.537 EOF 00:18:03.537 )") 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.537 { 00:18:03.537 "params": { 00:18:03.537 "name": "Nvme$subsystem", 00:18:03.537 "trtype": "$TEST_TRANSPORT", 00:18:03.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.537 "adrfam": "ipv4", 00:18:03.537 "trsvcid": "$NVMF_PORT", 00:18:03.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.537 "hdgst": ${hdgst:-false}, 00:18:03.537 "ddgst": ${ddgst:-false} 00:18:03.537 }, 00:18:03.537 "method": "bdev_nvme_attach_controller" 00:18:03.537 } 00:18:03.537 EOF 00:18:03.537 )") 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.537 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 [2024-10-08 18:23:16.632862] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:18:03.538 [2024-10-08 18:23:16.632927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437651 ] 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:03.538 { 00:18:03.538 "params": { 00:18:03.538 "name": 
"Nvme$subsystem", 00:18:03.538 "trtype": "$TEST_TRANSPORT", 00:18:03.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "$NVMF_PORT", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.538 "hdgst": ${hdgst:-false}, 00:18:03.538 "ddgst": ${ddgst:-false} 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 } 00:18:03.538 EOF 00:18:03.538 )") 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:18:03.538 18:23:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme1", 00:18:03.538 "trtype": "rdma", 00:18:03.538 "traddr": "192.168.100.8", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "4420", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.538 "hdgst": false, 00:18:03.538 "ddgst": false 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 },{ 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme2", 00:18:03.538 "trtype": "rdma", 00:18:03.538 "traddr": "192.168.100.8", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "4420", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:03.538 "hdgst": false, 00:18:03.538 "ddgst": false 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 },{ 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme3", 00:18:03.538 "trtype": "rdma", 00:18:03.538 "traddr": "192.168.100.8", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "4420", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:03.538 "hdgst": false, 00:18:03.538 "ddgst": false 00:18:03.538 }, 00:18:03.538 "method": "bdev_nvme_attach_controller" 00:18:03.538 },{ 00:18:03.538 "params": { 00:18:03.538 "name": "Nvme4", 00:18:03.538 "trtype": "rdma", 00:18:03.538 "traddr": "192.168.100.8", 00:18:03.538 "adrfam": "ipv4", 00:18:03.538 "trsvcid": "4420", 00:18:03.538 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:03.538 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme5", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme6", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": 
"bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme7", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme8", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme9", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 },{ 00:18:03.539 "params": { 00:18:03.539 "name": "Nvme10", 00:18:03.539 "trtype": "rdma", 00:18:03.539 "traddr": "192.168.100.8", 00:18:03.539 "adrfam": "ipv4", 00:18:03.539 "trsvcid": "4420", 00:18:03.539 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:03.539 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:03.539 "hdgst": false, 00:18:03.539 "ddgst": false 00:18:03.539 }, 00:18:03.539 "method": "bdev_nvme_attach_controller" 00:18:03.539 }' 00:18:03.798 [2024-10-08 18:23:16.723098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.798 [2024-10-08 18:23:16.806505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.865 Running I/O for 1 seconds... 
00:18:05.801 3324.00 IOPS, 207.75 MiB/s 00:18:05.801 Latency(us) 00:18:05.801 [2024-10-08T16:23:18.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.801 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme1n1 : 1.19 371.51 23.22 0.00 0.00 169920.70 7921.31 222480.47 00:18:05.801 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme2n1 : 1.19 376.83 23.55 0.00 0.00 165399.15 8662.15 162301.33 00:18:05.801 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme3n1 : 1.19 378.96 23.68 0.00 0.00 161858.98 4587.52 155918.69 00:18:05.801 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme4n1 : 1.19 381.90 23.87 0.00 0.00 158558.07 5328.36 147712.45 00:18:05.801 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme5n1 : 1.19 375.48 23.47 0.00 0.00 159640.71 10485.76 134035.37 00:18:05.801 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme6n1 : 1.19 375.06 23.44 0.00 0.00 157190.42 11169.61 124005.51 00:18:05.801 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme7n1 : 1.20 374.74 23.42 0.00 0.00 154588.64 11226.60 119446.48 00:18:05.801 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme8n1 : 1.20 374.35 23.40 0.00 0.00 152992.15 11340.58 110784.33 00:18:05.801 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme9n1 : 1.20 373.85 23.37 0.00 0.00 151610.90 12081.42 113519.75 00:18:05.801 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:05.801 Verification LBA range: start 0x0 length 0x400 00:18:05.801 Nvme10n1 : 1.20 319.34 19.96 0.00 0.00 174708.76 2635.69 232510.33 00:18:05.801 [2024-10-08T16:23:18.974Z] =================================================================================================================== 00:18:05.801 [2024-10-08T16:23:18.974Z] Total : 3702.01 231.38 0.00 0.00 160426.01 2635.69 232510.33 00:18:06.060 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:18:06.060 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:06.060 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:06.320 18:23:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:06.320 rmmod nvme_rdma 00:18:06.320 rmmod nvme_fabrics 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3437027 ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3437027 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3437027 ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3437027 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437027 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437027' 00:18:06.320 killing process with pid 3437027 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3437027 00:18:06.320 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3437027 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:06.892 00:18:06.892 real 0m14.021s 00:18:06.892 user 0m32.319s 00:18:06.892 sys 0m6.534s 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.892 18:23:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:06.892 ************************************ 00:18:06.892 END TEST nvmf_shutdown_tc1 00:18:06.892 ************************************ 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:06.892 ************************************ 00:18:06.892 START TEST nvmf_shutdown_tc2 00:18:06.892 ************************************ 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:06.892 18:23:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:06.892 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:06.892 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:06.892 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:06.892 18:23:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:06.893 Found net devices under 0000:18:00.0: mlx_0_0 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:06.893 Found net devices under 0000:18:00.1: mlx_0_1 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:06.893 18:23:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # rdma_device_init 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
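With the ib_*/rdma_* modules loaded, allocate_nic_ips below walks the RDMA-capable interfaces and pulls each one's IPv4 address out of `ip -o -4 addr show`. The helper traced below (nvmf/common.sh@116-117) condenses to the following restatement; it is a sketch of the same pipeline, not the verbatim function from nvmf/common.sh:

    get_ip_address() {
        local interface=$1
        # The fourth field of `ip -o -4 addr show <if>` is e.g. 192.168.100.8/24;
        # cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # On this rig the trace resolves mlx_0_0 -> 192.168.100.8 and mlx_0_1 -> 192.168.100.9.
    get_ip_address mlx_0_0
    get_ip_address mlx_0_1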
00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:06.893 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:07.153 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:07.153 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.153 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:07.153 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:07.153 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.154 18:23:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:07.154 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.154 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:07.154 altname enp24s0f0np0 00:18:07.154 altname ens785f0np0 00:18:07.154 inet 192.168.100.8/24 scope global mlx_0_0 00:18:07.154 valid_lft forever preferred_lft forever 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:07.154 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.154 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:07.154 altname enp24s0f1np1 00:18:07.154 altname ens785f1np1 00:18:07.154 inet 192.168.100.9/24 scope global mlx_0_1 00:18:07.154 valid_lft forever preferred_lft forever 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.154 18:23:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:07.154 192.168.100.9' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:07.154 192.168.100.9' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # head -n 1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:07.154 192.168.100.9' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # tail -n +2 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # head -n 1 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3438127 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3438127 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3438127 ']' 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.154 18:23:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 [2024-10-08 18:23:20.306332] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:07.155 [2024-10-08 18:23:20.306397] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.414 [2024-10-08 18:23:20.398227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.414 [2024-10-08 18:23:20.489905] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.414 [2024-10-08 18:23:20.489947] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.414 [2024-10-08 18:23:20.489957] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.414 [2024-10-08 18:23:20.489966] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.414 [2024-10-08 18:23:20.489974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.414 [2024-10-08 18:23:20.491417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.414 [2024-10-08 18:23:20.491455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.414 [2024-10-08 18:23:20.491554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.414 [2024-10-08 18:23:20.491556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 [2024-10-08 18:23:21.239992] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa665e0/0xa6aad0) succeed. 00:18:08.353 [2024-10-08 18:23:21.250643] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa67c20/0xaac170) succeed. 
00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:08.353 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.354 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.354 Malloc1 00:18:08.354 [2024-10-08 18:23:21.479556] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:08.354 Malloc2 00:18:08.614 Malloc3 00:18:08.614 Malloc4 00:18:08.614 Malloc5 00:18:08.614 Malloc6 00:18:08.614 Malloc7 00:18:08.874 Malloc8 00:18:08.874 Malloc9 00:18:08.874 Malloc10 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3438453 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3438453 /var/tmp/bdevperf.sock 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3438453 ']' 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 [2024-10-08 18:23:21.999883] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:18:08.874 [2024-10-08 18:23:21.999949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438453 ] 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.874 "trtype": "$TEST_TRANSPORT", 00:18:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.874 "adrfam": "ipv4", 00:18:08.874 "trsvcid": "$NVMF_PORT", 00:18:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.874 "hdgst": ${hdgst:-false}, 00:18:08.874 "ddgst": ${ddgst:-false} 00:18:08.874 }, 00:18:08.874 "method": "bdev_nvme_attach_controller" 00:18:08.874 } 00:18:08.874 EOF 00:18:08.874 )") 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.874 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.874 { 00:18:08.874 "params": { 00:18:08.874 "name": "Nvme$subsystem", 00:18:08.875 "trtype": "$TEST_TRANSPORT", 00:18:08.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "$NVMF_PORT", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.875 "hdgst": ${hdgst:-false}, 00:18:08.875 "ddgst": ${ddgst:-false} 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 } 00:18:08.875 EOF 00:18:08.875 )") 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:08.875 { 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme$subsystem", 00:18:08.875 "trtype": "$TEST_TRANSPORT", 00:18:08.875 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "$NVMF_PORT", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.875 "hdgst": ${hdgst:-false}, 00:18:08.875 "ddgst": ${ddgst:-false} 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 } 00:18:08.875 EOF 00:18:08.875 )") 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:18:08.875 18:23:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme1", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme2", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme3", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme4", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme5", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme6", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme7", 00:18:08.875 
"trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme8", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme9", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 },{ 00:18:08.875 "params": { 00:18:08.875 "name": "Nvme10", 00:18:08.875 "trtype": "rdma", 00:18:08.875 "traddr": "192.168.100.8", 00:18:08.875 "adrfam": "ipv4", 00:18:08.875 "trsvcid": "4420", 00:18:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:08.875 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:08.875 "hdgst": false, 00:18:08.875 "ddgst": false 00:18:08.875 }, 00:18:08.875 "method": "bdev_nvme_attach_controller" 00:18:08.875 }' 00:18:09.134 [2024-10-08 18:23:22.091517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.134 [2024-10-08 18:23:22.173570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.072 Running I/O for 10 seconds... 
00:18:10.072 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:10.332 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.332 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=19 00:18:10.332 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:18:10.332 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.592 
18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=170 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 170 -ge 100 ']' 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3438453 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3438453 ']' 
00:18:10.592 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3438453 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438453 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438453' 
00:18:10.851 killing process with pid 3438453 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3438453 
00:18:10.851 18:23:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3438453 
00:18:10.851 Received shutdown signal, test time was about 0.842640 seconds 
00:18:10.851 
00:18:10.851 Latency(us) 
00:18:10.851 [2024-10-08T16:23:24.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:10.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.851 Verification LBA range: start 0x0 length 0x400 
00:18:10.851 Nvme1n1 : 0.83 356.77 22.30 0.00 0.00 175988.02 6012.22 225215.89 
00:18:10.851 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.851 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme2n1 : 0.83 366.93 22.93 0.00 0.00 168471.76 8377.21 217009.64 
00:18:10.852 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme3n1 : 0.83 385.68 24.11 0.00 0.00 156790.56 7123.48 152271.47 
00:18:10.852 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme4n1 : 0.83 385.09 24.07 0.00 0.00 153989.30 9175.04 144977.03 
00:18:10.852 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme5n1 : 0.83 384.34 24.02 0.00 0.00 151900.87 9915.88 132211.76 
00:18:10.852 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme6n1 : 0.83 383.80 23.99 0.00 0.00 148230.05 10371.78 124917.31 
00:18:10.852 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme7n1 : 0.83 383.27 23.95 0.00 0.00 145316.11 10542.75 117622.87 
00:18:10.852 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme8n1 : 0.84 382.65 23.92 0.00 0.00 142842.21 10884.67 108048.92 
00:18:10.852 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme9n1 : 0.84 381.95 23.87 0.00 0.00 140552.46 11568.53 95283.65 
00:18:10.852 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:10.852 Verification LBA range: start 0x0 length 0x400 
00:18:10.852 Nvme10n1 : 0.84 304.05 19.00 0.00 0.00 172606.61 3020.35 230686.72 
00:18:10.852 [2024-10-08T16:23:24.025Z] =================================================================================================================== 
00:18:10.852 [2024-10-08T16:23:24.025Z] Total : 3714.51 232.16 0.00 0.00 155088.23 3020.35 230686.72 
00:18:11.111 18:23:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3438127 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 
00:18:12.050 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:18:12.310 rmmod nvme_rdma 00:18:12.310 rmmod nvme_fabrics 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3438127 ']' 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3438127 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3438127 ']' 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3438127 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438127 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438127' 00:18:12.310 killing process with pid 3438127 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3438127 00:18:12.310 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3438127 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:12.880 00:18:12.880 real 0m5.843s 00:18:12.880 user 0m23.321s 00:18:12.880 sys 0m1.318s 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:12.880 ************************************ 00:18:12.880 END TEST nvmf_shutdown_tc2 00:18:12.880 ************************************ 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:12.880 ************************************ 00:18:12.880 START TEST nvmf_shutdown_tc3 00:18:12.880 ************************************ 00:18:12.880 18:23:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:12.880 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:18:12.881 18:23:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:12.881 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:12.881 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:12.881 Found net devices under 0000:18:00.0: mlx_0_0 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
[[ rdma == tcp ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:12.881 Found net devices under 0000:18:00.1: mlx_0_1 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # rdma_device_init 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:12.881 18:23:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:12.881 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:12.881 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.881 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.881 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:12.882 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:12.882 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:12.882 altname enp24s0f0np0 00:18:12.882 altname ens785f0np0 00:18:12.882 inet 192.168.100.8/24 scope global mlx_0_0 00:18:12.882 valid_lft forever preferred_lft forever 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:12.882 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.141 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:13.142 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.142 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:13.142 altname enp24s0f1np1 00:18:13.142 altname ens785f1np1 00:18:13.142 inet 192.168.100.9/24 scope global mlx_0_1 00:18:13.142 valid_lft forever preferred_lft forever 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:13.142 192.168.100.9' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:13.142 192.168.100.9' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # head -n 1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:13.142 192.168.100.9' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # tail -n +2 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # head -n 1 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:13.142 
18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3439111 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3439111 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3439111 ']' 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.142 18:23:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:13.142 [2024-10-08 18:23:26.239861] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:13.142 [2024-10-08 18:23:26.239926] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.402 [2024-10-08 18:23:26.327838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.402 [2024-10-08 18:23:26.417558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.402 [2024-10-08 18:23:26.417602] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:13.402 [2024-10-08 18:23:26.417612] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.402 [2024-10-08 18:23:26.417624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.402 [2024-10-08 18:23:26.417632] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.402 [2024-10-08 18:23:26.419120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.402 [2024-10-08 18:23:26.419221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.402 [2024-10-08 18:23:26.419325] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.402 [2024-10-08 18:23:26.419326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:13.971 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.971 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:18:13.971 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:13.971 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.971 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 [2024-10-08 18:23:27.175912] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf25e0/0x1bf6ad0) succeed. 00:18:14.231 [2024-10-08 18:23:27.186393] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bf3c20/0x1c38170) succeed. 
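Stripped of the nvmfappstart/rpc_cmd wrappers, the target bring-up traced above amounts to roughly the following (a sketch; the binary path is the one from this workspace, and the RPC call is assumed to go through the stock scripts/rpc.py):

    # start the NVMe-oF target on core mask 0x1E with all tracepoint groups (0xFFFF) enabled
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # once the target listens on /var/tmp/spdk.sock, create the RDMA transport used by the test
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192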
00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.231 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.232 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.232 Malloc1 00:18:14.491 [2024-10-08 18:23:27.410131] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:14.491 Malloc2 00:18:14.491 Malloc3 00:18:14.491 Malloc4 00:18:14.491 Malloc5 00:18:14.491 Malloc6 00:18:14.491 Malloc7 00:18:14.751 Malloc8 00:18:14.751 Malloc9 00:18:14.751 Malloc10 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3439432 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3439432 /var/tmp/bdevperf.sock 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3439432 ']' 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
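The rpcs.txt assembled by the cat loop above is not echoed into the log; per subsystem it boils down to a Malloc bdev exposed by an RDMA-listening subsystem, which is what produces the Malloc1..Malloc10 bdevs and the "Listening on 192.168.100.8 port 4420" notice above. A hedged sketch of that kind of RPC batch (the bdev size, block size and subsystem options are assumptions, not visible in this excerpt):

    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                      # 64 MiB, 512 B blocks (assumed)
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a       # -a: allow any host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done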
00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.751 { 00:18:14.751 "params": { 00:18:14.751 "name": "Nvme$subsystem", 00:18:14.751 "trtype": "$TEST_TRANSPORT", 00:18:14.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.751 "adrfam": "ipv4", 00:18:14.751 "trsvcid": "$NVMF_PORT", 00:18:14.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.751 "hdgst": ${hdgst:-false}, 00:18:14.751 "ddgst": ${ddgst:-false} 00:18:14.751 }, 00:18:14.751 "method": "bdev_nvme_attach_controller" 00:18:14.751 } 00:18:14.751 EOF 00:18:14.751 )") 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.751 { 00:18:14.751 "params": { 00:18:14.751 "name": "Nvme$subsystem", 00:18:14.751 "trtype": "$TEST_TRANSPORT", 00:18:14.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.751 "adrfam": "ipv4", 00:18:14.751 "trsvcid": "$NVMF_PORT", 00:18:14.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.751 "hdgst": ${hdgst:-false}, 00:18:14.751 "ddgst": ${ddgst:-false} 00:18:14.751 }, 00:18:14.751 "method": "bdev_nvme_attach_controller" 00:18:14.751 } 00:18:14.751 EOF 00:18:14.751 )") 00:18:14.751 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.752 { 00:18:14.752 "params": { 00:18:14.752 "name": "Nvme$subsystem", 00:18:14.752 "trtype": "$TEST_TRANSPORT", 00:18:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.752 "adrfam": "ipv4", 00:18:14.752 "trsvcid": "$NVMF_PORT", 00:18:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.752 "hdgst": ${hdgst:-false}, 00:18:14.752 "ddgst": ${ddgst:-false} 00:18:14.752 }, 00:18:14.752 "method": "bdev_nvme_attach_controller" 00:18:14.752 } 00:18:14.752 EOF 00:18:14.752 )") 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.752 18:23:27 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.752 { 00:18:14.752 "params": { 00:18:14.752 "name": "Nvme$subsystem", 00:18:14.752 "trtype": "$TEST_TRANSPORT", 00:18:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.752 "adrfam": "ipv4", 00:18:14.752 "trsvcid": "$NVMF_PORT", 00:18:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.752 "hdgst": ${hdgst:-false}, 00:18:14.752 "ddgst": ${ddgst:-false} 00:18:14.752 }, 00:18:14.752 "method": "bdev_nvme_attach_controller" 00:18:14.752 } 00:18:14.752 EOF 00:18:14.752 )") 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.752 { 00:18:14.752 "params": { 00:18:14.752 "name": "Nvme$subsystem", 00:18:14.752 "trtype": "$TEST_TRANSPORT", 00:18:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.752 "adrfam": "ipv4", 00:18:14.752 "trsvcid": "$NVMF_PORT", 00:18:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.752 "hdgst": ${hdgst:-false}, 00:18:14.752 "ddgst": ${ddgst:-false} 00:18:14.752 }, 00:18:14.752 "method": "bdev_nvme_attach_controller" 00:18:14.752 } 00:18:14.752 EOF 00:18:14.752 )") 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.752 { 00:18:14.752 "params": { 00:18:14.752 "name": "Nvme$subsystem", 00:18:14.752 "trtype": "$TEST_TRANSPORT", 00:18:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.752 "adrfam": "ipv4", 00:18:14.752 "trsvcid": "$NVMF_PORT", 00:18:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.752 "hdgst": ${hdgst:-false}, 00:18:14.752 "ddgst": ${ddgst:-false} 00:18:14.752 }, 00:18:14.752 "method": "bdev_nvme_attach_controller" 00:18:14.752 } 00:18:14.752 EOF 00:18:14.752 )") 00:18:14.752 [2024-10-08 18:23:27.911628] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:14.752 [2024-10-08 18:23:27.911691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439432 ] 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:14.752 { 00:18:14.752 "params": { 00:18:14.752 "name": "Nvme$subsystem", 00:18:14.752 "trtype": "$TEST_TRANSPORT", 00:18:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.752 "adrfam": "ipv4", 00:18:14.752 "trsvcid": "$NVMF_PORT", 00:18:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.752 "hdgst": ${hdgst:-false}, 00:18:14.752 "ddgst": ${ddgst:-false} 00:18:14.752 }, 00:18:14.752 "method": "bdev_nvme_attach_controller" 00:18:14.752 } 00:18:14.752 EOF 00:18:14.752 )") 00:18:14.752 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:15.012 { 00:18:15.012 "params": { 00:18:15.012 "name": "Nvme$subsystem", 00:18:15.012 "trtype": "$TEST_TRANSPORT", 00:18:15.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.012 "adrfam": "ipv4", 00:18:15.012 "trsvcid": "$NVMF_PORT", 00:18:15.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.012 "hdgst": ${hdgst:-false}, 00:18:15.012 "ddgst": ${ddgst:-false} 00:18:15.012 }, 00:18:15.012 "method": "bdev_nvme_attach_controller" 00:18:15.012 } 00:18:15.012 EOF 00:18:15.012 )") 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:15.012 { 00:18:15.012 "params": { 00:18:15.012 "name": "Nvme$subsystem", 00:18:15.012 "trtype": "$TEST_TRANSPORT", 00:18:15.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.012 "adrfam": "ipv4", 00:18:15.012 "trsvcid": "$NVMF_PORT", 00:18:15.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.012 "hdgst": ${hdgst:-false}, 00:18:15.012 "ddgst": ${ddgst:-false} 00:18:15.012 }, 00:18:15.012 "method": "bdev_nvme_attach_controller" 00:18:15.012 } 00:18:15.012 EOF 00:18:15.012 )") 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:15.012 { 00:18:15.012 "params": { 00:18:15.012 "name": 
"Nvme$subsystem", 00:18:15.012 "trtype": "$TEST_TRANSPORT", 00:18:15.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.012 "adrfam": "ipv4", 00:18:15.012 "trsvcid": "$NVMF_PORT", 00:18:15.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.012 "hdgst": ${hdgst:-false}, 00:18:15.012 "ddgst": ${ddgst:-false} 00:18:15.012 }, 00:18:15.012 "method": "bdev_nvme_attach_controller" 00:18:15.012 } 00:18:15.012 EOF 00:18:15.012 )") 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:18:15.012 18:23:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:15.012 "params": { 00:18:15.012 "name": "Nvme1", 00:18:15.012 "trtype": "rdma", 00:18:15.012 "traddr": "192.168.100.8", 00:18:15.012 "adrfam": "ipv4", 00:18:15.012 "trsvcid": "4420", 00:18:15.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.012 "hdgst": false, 00:18:15.012 "ddgst": false 00:18:15.012 }, 00:18:15.012 "method": "bdev_nvme_attach_controller" 00:18:15.012 },{ 00:18:15.012 "params": { 00:18:15.012 "name": "Nvme2", 00:18:15.012 "trtype": "rdma", 00:18:15.012 "traddr": "192.168.100.8", 00:18:15.012 "adrfam": "ipv4", 00:18:15.012 "trsvcid": "4420", 00:18:15.012 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:15.012 "hdgst": false, 00:18:15.012 "ddgst": false 00:18:15.012 }, 00:18:15.012 "method": "bdev_nvme_attach_controller" 00:18:15.012 },{ 00:18:15.012 "params": { 00:18:15.012 "name": "Nvme3", 00:18:15.012 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme4", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme5", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme6", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": 
"bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme7", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme8", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme9", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 },{ 00:18:15.013 "params": { 00:18:15.013 "name": "Nvme10", 00:18:15.013 "trtype": "rdma", 00:18:15.013 "traddr": "192.168.100.8", 00:18:15.013 "adrfam": "ipv4", 00:18:15.013 "trsvcid": "4420", 00:18:15.013 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:15.013 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:15.013 "hdgst": false, 00:18:15.013 "ddgst": false 00:18:15.013 }, 00:18:15.013 "method": "bdev_nvme_attach_controller" 00:18:15.013 }' 00:18:15.013 [2024-10-08 18:23:28.002701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.013 [2024-10-08 18:23:28.084649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.954 Running I/O for 10 seconds... 
00:18:15.954 18:23:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.954 18:23:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:18:15.954 18:23:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:15.954 18:23:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.954 18:23:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.954 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:16.213 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.213 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=26 00:18:16.213 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 26 -ge 100 ']' 00:18:16.213 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:18:16.472 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:18:16.472 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:16.472 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:16.472 18:23:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:16.472 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.472 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=179 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 179 -ge 100 ']' 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3439111 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3439111 ']' 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3439111 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3439111 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3439111' 00:18:16.732 killing process with pid 3439111 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3439111 00:18:16.732 18:23:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3439111 00:18:17.251 2751.00 IOPS, 171.94 MiB/s [2024-10-08T16:23:30.424Z] 18:23:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:18:17.830 [2024-10-08 18:23:30.788878] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802bc0 was disconnected and freed. reset controller. 
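The waitforio/killprocess sequence above is the core of shutdown_tc3: poll bdevperf's iostat until Nvme1n1 has completed at least 100 reads, then kill the target while that I/O is still in flight. A standalone sketch of the traced loop (rpc_cmd -s is assumed to resolve to scripts/rpc.py -s):

    i=10 ret=1
    while (( i != 0 )); do
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')    # 26 on the first pass in this run, 179 on the second
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    kill $nvmfpid && wait $nvmfpid    # pid 3439111 here; the qpair resets and aborted reads below are the expected fallout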
00:18:17.830 [2024-10-08 18:23:30.789203] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:17.830 [2024-10-08 18:23:30.791699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.830 [2024-10-08 18:23:30.792010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.830 [2024-10-08 18:23:30.792050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.792080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.830 [2024-10-08 18:23:30.792106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.792134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.830 [2024-10-08 18:23:30.792160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.792187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.830 [2024-10-08 18:23:30.792214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.794584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.830 [2024-10-08 18:23:30.794627] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:17.830 [2024-10-08 18:23:30.797317] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.830 [2024-10-08 18:23:30.797368] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.830 [2024-10-08 18:23:30.797391] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:18:17.830 [2024-10-08 18:23:30.802055] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.830 [2024-10-08 18:23:30.812080] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.830 [2024-10-08 18:23:30.822091] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.830 [2024-10-08 18:23:30.832106] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.830 [2024-10-08 18:23:30.842144] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:17.830 [2024-10-08 18:23:30.847480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184500 00:18:17.830 [2024-10-08 18:23:30.847500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.847538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184500 00:18:17.830 [2024-10-08 18:23:30.847549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.847562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184500 00:18:17.830 [2024-10-08 18:23:30.847571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.847585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184500 00:18:17.830 [2024-10-08 18:23:30.847594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.847607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x184500 00:18:17.830 [2024-10-08 18:23:30.847616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.830 [2024-10-08 18:23:30.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847714] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847910] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012216000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121f5000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121d4000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.847984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.847996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001358d000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134e8000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f495000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f474000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000f453000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f432000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f411000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010db8000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x184500 00:18:17.831 [2024-10-08 18:23:30.848453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.831 [2024-10-08 18:23:30.848466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebf2000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec13000 len:0x10000 
key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec34000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115b6000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011595000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011574000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011553000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011532000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011511000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 
18:23:30.848714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114f0000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ef000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ce000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ad000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b58c000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b529000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.848898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b508000 len:0x10000 key:0x184500 00:18:17.832 [2024-10-08 18:23:30.848906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.851426] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:18:17.832 [2024-10-08 18:23:30.852290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.852322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:15635 cdw0:0 sqhd:649e p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.852346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.852367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:15635 cdw0:0 sqhd:649e p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.852390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.852414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:15635 cdw0:0 sqhd:649e p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.852437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.852457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:15635 cdw0:0 sqhd:649e p:0 m:0 dnr:0 00:18:17.832 [2024-10-08 18:23:30.854909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.832 [2024-10-08 18:23:30.854952] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:17.832 [2024-10-08 18:23:30.855014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.855049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.855082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.855114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.855147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.855177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.855210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.855240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.857090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.832 [2024-10-08 18:23:30.857132] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:17.832 [2024-10-08 18:23:30.857182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.857215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.857248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.857280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.857313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.857345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.857377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.857408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.859993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.832 [2024-10-08 18:23:30.860044] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:18:17.832 [2024-10-08 18:23:30.860100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.860134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.860167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.860197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.860237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.860269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.832 [2024-10-08 18:23:30.860301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.832 [2024-10-08 18:23:30.860332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.862750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.833 [2024-10-08 18:23:30.862791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:17.833 [2024-10-08 18:23:30.862841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.862891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.862925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.862956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.862989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.863033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.863066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.863098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.865162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.833 [2024-10-08 18:23:30.865202] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:18:17.833 [2024-10-08 18:23:30.865248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.865313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.865343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.865375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.865406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.865437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.865469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.867850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.833 [2024-10-08 18:23:30.867892] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:18:17.833 [2024-10-08 18:23:30.867952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.867987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.868031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.868061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.868094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.868124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.868155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.868185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.870376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.833 [2024-10-08 18:23:30.870419] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:18:17.833 [2024-10-08 18:23:30.870468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.870500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.870533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.870562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.870595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.870625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.870657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.833 [2024-10-08 18:23:30.870687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61090 cdw0:0 sqhd:c300 p:1 m:1 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:17.833 [2024-10-08 18:23:30.873098] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:18:17.833 [2024-10-08 18:23:30.873317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.873947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.873979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.874116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.874193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.874269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b21f700 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.874344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x184500 00:18:17.833 [2024-10-08 18:23:30.874420] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e5eb80 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4eb00 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3ea80 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2ea00 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1e980 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.833 [2024-10-08 18:23:30.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e900 len:0x10000 key:0x183d00 00:18:17.833 [2024-10-08 18:23:30.874881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.874925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.874958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002171c0 len:0x10000 key:0x184000 00:18:17.834 [2024-10-08 18:23:30.875813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 
p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d0f000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.875889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.875935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cee000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.875968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ccd000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c49000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c28000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c07000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 
18:23:30.876578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be6000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc5000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba4000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.876962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b41000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.876995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b20000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.877134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010956000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.877216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010977000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.877292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010998000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.877371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109b9000 len:0x10000 key:0x184500 00:18:17.834 [2024-10-08 18:23:30.877448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.834 [2024-10-08 18:23:30.877492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109da000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109fb000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010a1c000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.877953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108d2000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.877984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.878045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108b1000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.878078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.878122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010890000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.878154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.878198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba0f000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.878231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.878275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9ee000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.878307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.878352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9cd000 len:0x10000 key:0x184500 00:18:17.835 [2024-10-08 18:23:30.878385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.882803] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802900 was disconnected and freed. reset controller. 00:18:17.835 [2024-10-08 18:23:30.882857] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:17.835 [2024-10-08 18:23:30.882900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.882933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.882982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008cf080 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 
18:23:30.883540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084ec80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x184400 00:18:17.835 [2024-10-08 18:23:30.883884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005efe00 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.883947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.883983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.835 [2024-10-08 18:23:30.884441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183c00 00:18:17.835 [2024-10-08 18:23:30.884468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.884950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183c00 00:18:17.836 [2024-10-08 18:23:30.885053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c798000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb3000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001292d000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x2000128ca000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.885938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.885965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012867000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9ac000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b98b000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b96a000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b949000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 
key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.836 [2024-10-08 18:23:30.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x184500 00:18:17.836 [2024-10-08 18:23:30.886985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.887033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a5a000 len:0x10000 key:0x184500 00:18:17.837 [2024-10-08 18:23:30.887060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.887097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a7b000 len:0x10000 key:0x184500 00:18:17.837 [2024-10-08 
18:23:30.887125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891171] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802640 was disconnected and freed. reset controller. 00:18:17.837 [2024-10-08 18:23:30.891218] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.837 [2024-10-08 18:23:30.891259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6fa00 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f980 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f900 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f880 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f800 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f780 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f700 len:0x10000 key:0x182c00 00:18:17.837 [2024-10-08 18:23:30.891783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.891858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.891941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.891985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf600 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4f300 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 
dnr:0 00:18:17.837 [2024-10-08 18:23:30.892566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182b00 00:18:17.837 [2024-10-08 18:23:30.892809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1f0000 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.892878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1dff80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.892947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1cff00 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1bfe80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a1afe00 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 
18:23:30.893224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a19fd80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a18fd00 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a17fc80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a16fc00 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a15fb80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a14fb00 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a13fa80 len:0x10000 key:0x182d00 00:18:17.837 [2024-10-08 18:23:30.893674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x184500 00:18:17.837 [2024-10-08 18:23:30.893743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184500 00:18:17.837 [2024-10-08 18:23:30.893815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.837 [2024-10-08 18:23:30.893865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.893896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.893938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.893968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ab9000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a98000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a77000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a56000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a35000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a14000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129f3000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129d2000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129b1000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012990000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011abd000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.894958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ade000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.894988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011aff000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f810000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9b000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 
len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.895904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x184500 00:18:17.838 [2024-10-08 18:23:30.895935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900155] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013802380 was disconnected and freed. reset controller. 00:18:17.838 [2024-10-08 18:23:30.900209] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.838 [2024-10-08 18:23:30.900251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29fb80 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28fb00 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27fa80 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26fa00 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f980 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f900 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.838 [2024-10-08 18:23:30.900713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f880 len:0x10000 key:0x182e00 00:18:17.838 [2024-10-08 18:23:30.900745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.900788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f800 len:0x10000 key:0x182e00 00:18:17.839 [2024-10-08 18:23:30.900820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.900864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f780 len:0x10000 key:0x182e00 00:18:17.839 [2024-10-08 18:23:30.900898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.900941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f700 len:0x10000 key:0x182e00 00:18:17.839 [2024-10-08 18:23:30.900973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182f00 00:18:17.839 [2024-10-08 18:23:30.901416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f1f000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011efe000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011edd000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e59000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e38000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.901946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.901984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e17000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f075000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f054000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f033000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001080c000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001082d000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 
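The run of nvme_io_qpair_print_command notices above is the host-side NVMe driver listing the READs still outstanding on I/O queue 1 as the queue pair is torn down: each record gives the command's LBA range on namespace 1 plus the keyed SGL data block it was submitted with, i.e. the remote buffer address, the transfer length (len:0x10000 bytes, matching the 128 512-byte blocks in each command), and the RDMA remote key 0x184500. A minimal sketch of that 16-byte keyed SGL data block descriptor, assuming the standard NVMe layout (address in bytes 0-7, 24-bit length in bytes 8-10, 32-bit key in bytes 11-14, SGL identifier in byte 15) and reusing the values from the first record rather than anything produced by the test itself:

# Illustrative sketch (not part of the test output): pack and unpack one of the keyed
# SGL entries printed above, assuming the NVMe Keyed SGL Data Block descriptor layout.
import struct

def pack_keyed_sgl(address: int, length: int, key: int) -> bytes:
    # bytes 0-7: address, bytes 8-10: 24-bit length, bytes 11-14: key,
    # byte 15: SGL identifier (descriptor type 4h, sub type 0h)
    return (struct.pack("<Q", address)
            + length.to_bytes(3, "little")
            + struct.pack("<I", key)
            + bytes([0x40]))

def unpack_keyed_sgl(desc: bytes):
    # Recover (address, length, key) from a packed descriptor.
    address = struct.unpack_from("<Q", desc, 0)[0]
    length = int.from_bytes(desc[8:11], "little")
    key = struct.unpack_from("<I", desc, 11)[0]
    return address, length, key

# Values copied from the first READ record above (lba:34816 len:128 -> 0x10000 bytes).
desc = pack_keyed_sgl(0x200011f1f000, 0x10000, 0x184500)
assert unpack_keyed_sgl(desc) == (0x200011f1f000, 0x10000, 0x184500)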
00:18:17.839 [2024-10-08 18:23:30.902659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012825000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127e3000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127c2000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.902952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.902990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127a1000 len:0x10000 key:0x184500 00:18:17.839 [2024-10-08 18:23:30.903030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.839 [2024-10-08 18:23:30.903068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012780000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012636000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012615000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125f4000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125d3000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125b2000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012591000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.903948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.903976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.904622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c189000 len:0x10000 key:0x184500 00:18:17.840 [2024-10-08 18:23:30.904650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908462] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000138020c0 was disconnected and freed. reset controller. 00:18:17.840 [2024-10-08 18:23:30.908509] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.840 [2024-10-08 18:23:30.908551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8dfd80 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cfd00 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bfc80 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8afc00 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89fb80 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88fb00 len:0x10000 
key:0x183100 00:18:17.840 [2024-10-08 18:23:30.908960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.908997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87fa80 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86fa00 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f980 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f900 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f880 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f800 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f780 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.840 [2024-10-08 18:23:30.909455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f700 len:0x10000 key:0x183100 00:18:17.840 [2024-10-08 18:23:30.909487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 
18:23:30.909551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.909936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.909973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.910011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183000 00:18:17.841 [2024-10-08 18:23:30.910076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001abf0000 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001abdff80 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001abcff00 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001abbfe80 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001abafe00 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ab9fd80 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ab8fd00 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ab7fc80 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ab6fc00 len:0x10000 key:0x183200 00:18:17.841 [2024-10-08 18:23:30.910655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.910719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3f0000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.910786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.910851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.910917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.910958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ad000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.910986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001188c000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001186b000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001184a000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011829000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011808000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117e7000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117c6000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117a5000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011784000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011763000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.841 [2024-10-08 18:23:30.911683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184500 00:18:17.841 [2024-10-08 18:23:30.911705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.911734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.911755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 
dnr:0 00:18:17.842 [2024-10-08 18:23:30.911783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.911805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.911834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.911855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.911885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.911907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.911936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.911957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.911986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 
18:23:30.912247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.912396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184500 00:18:17.842 [2024-10-08 18:23:30.912417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.915859] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801e00 was disconnected and freed. reset controller. 00:18:17.842 [2024-10-08 18:23:30.915890] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
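Every print_command notice above is paired with a print_completion notice carrying the same status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08, command aborted due to submission queue deletion, with dnr:0 so the aborted I/O may be retried. Once the outstanding commands have been failed this way, bdev_nvme_disconnected_qpair_cb frees the qpair and starts a controller reset, and the immediately following failover attempt is skipped because that reset is already in progress. A minimal sketch of decoding one such completion line, assuming the parenthesised pair is the hex (sct/sc) value as printed here and covering only the single status that appears in this log:

# Illustrative sketch (not part of the test output): extract and decode the (sct/sc)
# status pair from one of the completion notices above. Only the status seen in this
# log is interpreted; a full decoder would carry the complete NVMe status tables.
import re

COMPLETION_RE = re.compile(
    r"\*NOTICE\*: (?P<text>[A-Z ]+-[A-Z ]+) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
    r" qid:(?P<qid>\d+) .* dnr:(?P<dnr>\d)"
)

line = ("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 "
        "sqhd:7250 p:0 m:0 dnr:0")

m = COMPLETION_RE.search(line)
sct, sc, dnr = int(m["sct"], 16), int(m["sc"], 16), int(m["dnr"])
print(f"{m['text']}: sct=0x{sct:x} sc=0x{sc:x} retryable={dnr == 0}")
# -> ABORTED - SQ DELETION: sct=0x0 sc=0x8 retryable=True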
00:18:17.842 [2024-10-08 18:23:30.915917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f900 len:0x10000 key:0x184100 00:18:17.842 [2024-10-08 18:23:30.915937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.915980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f880 len:0x10000 key:0x184100 00:18:17.842 [2024-10-08 18:23:30.916008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f800 len:0x10000 key:0x184100 00:18:17.842 [2024-10-08 18:23:30.916056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f780 len:0x10000 key:0x184100 00:18:17.842 [2024-10-08 18:23:30.916104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f700 len:0x10000 key:0x184100 00:18:17.842 [2024-10-08 18:23:30.916156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 
18:23:30.916379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af0f900 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeff880 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeef800 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.916986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.917029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183600 00:18:17.842 [2024-10-08 18:23:30.917051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.842 [2024-10-08 18:23:30.917078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aecf700 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae9f580 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae7f480 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f180 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f100 len:0x10000 key:0x183600 00:18:17.843 [2024-10-08 18:23:30.917691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183500 00:18:17.843 [2024-10-08 18:23:30.917739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183500 00:18:17.843 [2024-10-08 18:23:30.917787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183500 00:18:17.843 [2024-10-08 18:23:30.917843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183500 00:18:17.843 [2024-10-08 18:23:30.917893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183500 00:18:17.843 [2024-10-08 18:23:30.917942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.917970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113a6000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.917992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113c7000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134a6000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134c7000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38528 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20000d5e7000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d608000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a39000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a18000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000119d6000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013296000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.843 [2024-10-08 18:23:30.918640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 
key:0x184500 00:18:17.843 [2024-10-08 18:23:30.918660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bc9000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ba8000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.918960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.918992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b87000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.919022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.919051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b66000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 18:23:30.919072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.919100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b45000 len:0x10000 key:0x184500 00:18:17.844 [2024-10-08 
18:23:30.919122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922520] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801b40 was disconnected and freed. reset controller. 00:18:17.844 [2024-10-08 18:23:30.922566] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.844 [2024-10-08 18:23:30.922607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2dfd80 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.922640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cfd00 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.922719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bfc80 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.922795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2afc00 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.922872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29fb80 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.922948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.922991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28fb00 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27fa80 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26fa00 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f980 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f900 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f880 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f800 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f780 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f700 len:0x10000 key:0x183b00 00:18:17.844 [2024-10-08 18:23:30.923367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183500 00:18:17.844 [2024-10-08 18:23:30.923405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183500 00:18:17.844 [2024-10-08 18:23:30.923442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183500 00:18:17.844 [2024-10-08 18:23:30.923480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 
m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183500 00:18:17.844 [2024-10-08 18:23:30.923517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5cff00 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5afe00 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b59fd80 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.844 [2024-10-08 18:23:30.923788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b58fd00 len:0x10000 key:0x183e00 00:18:17.844 [2024-10-08 18:23:30.923804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.923825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dd9000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.923842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 
18:23:30.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115d7000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.923882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.923904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4c6000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.923920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4e7000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.923958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.923981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.923997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012657000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012678000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f86000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011385000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011364000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011343000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011322000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011301000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112e0000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c525000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c504000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e3000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000c4c2000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.924961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.924983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a1000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013086000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d13000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cf2000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cd1000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cb0000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000116df000 len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.845 [2024-10-08 18:23:30.925321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec97000 
len:0x10000 key:0x184500 00:18:17.845 [2024-10-08 18:23:30.925338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.925361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecb8000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.925380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928333] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200013801880 was disconnected and freed. reset controller. 00:18:17.846 [2024-10-08 18:23:30.928379] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.846 [2024-10-08 18:23:30.928421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.928970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.928987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.929045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.929083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.929124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.929161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183400 00:18:17.846 [2024-10-08 18:23:30.929198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001229a000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122bb000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122dc000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122fd000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001231e000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001233f000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f40000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012279000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012258000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012237000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.929960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.929987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.930033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 
00:18:17.846 [2024-10-08 18:23:30.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.930072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.846 [2024-10-08 18:23:30.930094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x184500 00:18:17.846 [2024-10-08 18:23:30.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c903000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8e2000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8c1000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.930974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.930997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd23000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.931043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd02000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.931085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.931125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.931174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.931207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184500 00:18:17.847 [2024-10-08 18:23:30.931221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:76603000 sqhd:7250 p:0 m:0 dnr:0 00:18:17.847 [2024-10-08 18:23:30.949711] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000138015c0 was disconnected and freed. reset controller. 00:18:17.847 [2024-10-08 18:23:30.949768] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950053] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950108] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950151] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950193] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950241] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950279] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950292] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:17.847 [2024-10-08 18:23:30.950306] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
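Each *NOTICE* pair in the dump above is one outstanding I/O printed by the driver: the command line gives the opcode (READ/WRITE), sqid/cid, LBA, length and the RDMA SGL address/key, and the completion line that follows gives its status, here always ABORTED - SQ DELETION (00/08) because the submission queues are deleted while each controller is reset. A minimal offline sketch of how such a dump could be summarized (the regular expressions and the tally_aborted helper below are illustrative only, not part of the SPDK test scripts):

import re
from collections import Counter

# Command lines printed by nvme_io_qpair_print_command, e.g.
#   "*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 ... key:0x183500"
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")
# Completion status printed by spdk_nvme_print_completion for this failure mode.
ABORT = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def tally_aborted(log_text):
    """Count how many printed READ/WRITE commands were answered with an
    ABORTED - SQ DELETION completion, pairing matches by position in the
    text so the console line wrapping does not matter."""
    events = sorted(
        [(m.start(), "cmd", m.group(1)) for m in CMD.finditer(log_text)] +
        [(m.start(), "cpl", "") for m in ABORT.finditer(log_text)]
    )
    counts, pending = Counter(), None
    for _, kind, op in events:
        if kind == "cmd":
            pending = op
        elif pending is not None:
            counts[pending] += 1
            pending = None
    return counts

# Example: tally_aborted(open("console.log").read()) -> Counter({'WRITE': ..., 'READ': ...})

In the Latency(us) table that follows, MiB/s is IOPS divided by 16 because the workload uses a 64 KiB I/O size (e.g. 139.85 IOPS ≈ 8.74 MiB/s for Nvme1n1), and Fail/s is the rate of I/Os that completed with an error, such as the aborts printed above.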
00:18:17.847 task offset: 24576 on job bdev=Nvme10n1 fails
00:18:17.847
00:18:17.847 Latency(us)
00:18:17.847 [2024-10-08T16:23:31.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:17.848 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme1n1 ended in about 1.99 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme1n1 : 1.99 139.85 8.74 32.08 0.00 367285.96 9516.97 1079577.38
00:18:17.848 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme2n1 ended in about 1.92 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme2n1 : 1.92 149.76 9.36 33.28 0.00 344234.32 10257.81 1086871.82
00:18:17.848 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme3n1 ended in about 1.93 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme3n1 : 1.93 149.60 9.35 33.13 0.00 342136.02 20401.64 1094166.26
00:18:17.848 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme4n1 ended in about 1.94 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme4n1 : 1.94 152.53 9.53 32.98 0.00 334293.24 6667.58 1094166.26
00:18:17.848 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme5n1 ended in about 1.95 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme5n1 : 1.95 139.54 8.72 32.83 0.00 356708.51 34192.70 1094166.26
00:18:17.848 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme6n1 ended in about 1.96 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme6n1 : 1.96 147.16 9.20 32.70 0.00 339116.42 40575.33 1094166.26
00:18:17.848 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme7n1 ended in about 1.96 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme7n1 : 1.96 151.25 9.45 32.59 0.00 329448.04 50377.24 1094166.26
00:18:17.848 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme8n1 ended in about 1.97 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme8n1 : 1.97 142.64 8.92 32.49 0.00 343154.92 54480.36 1086871.82
00:18:17.848 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme9n1 ended in about 1.98 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme9n1 : 1.98 136.15 8.51 32.39 0.00 353481.62 44906.41 1086871.82
00:18:17.848 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:17.848 Job: Nvme10n1 ended in about 1.89 seconds with error
00:18:17.848 Verification LBA range: start 0x0 length 0x400
00:18:17.848 Nvme10n1 : 1.89 101.39 6.34 33.80 0.00 434688.89 70664.90 1064988.49
00:18:17.848 [2024-10-08T16:23:31.021Z] ===================================================================================================================
00:18:17.848 [2024-10-08T16:23:31.021Z] Total : 1409.88 88.12 328.28 0.00 352049.87 6667.58 1094166.26
00:18:17.848 [2024-10-08 18:23:30.976534] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:17.848 [2024-10-08 18:23:30.976576]
nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976608] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976620] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976630] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976641] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:18:17.848 [2024-10-08 18:23:30.976651] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:18:17.848 [2024-10-08 18:23:30.977298] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:18:17.848 [2024-10-08 18:23:30.977420] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.977436] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.977445] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a89000 00:18:17.848 [2024-10-08 18:23:30.991290] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.991358] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.991380] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a9a640 00:18:17.848 [2024-10-08 18:23:30.991494] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.991522] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.991541] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abf4c0 00:18:17.848 [2024-10-08 18:23:30.991657] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.991683] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.991703] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abf180 00:18:17.848 [2024-10-08 18:23:30.991834] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.991862] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.991882] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ad20c0 00:18:17.848 [2024-10-08 18:23:30.992037] nvme_rdma.c: 
542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:17.848 [2024-10-08 18:23:30.992073] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:17.848 [2024-10-08 18:23:30.992098] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ab9ac0 00:18:18.108 [2024-10-08 18:23:30.992228] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:18.108 [2024-10-08 18:23:30.992265] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:18.108 [2024-10-08 18:23:30.992290] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aba2c0 00:18:18.108 [2024-10-08 18:23:30.992434] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:18.108 [2024-10-08 18:23:30.992478] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:18.108 [2024-10-08 18:23:30.992504] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ae5280 00:18:18.108 [2024-10-08 18:23:30.992914] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:18.108 [2024-10-08 18:23:30.992955] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:18.108 [2024-10-08 18:23:30.992980] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a9a040 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3439432 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3439432 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.367 18:23:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3439432 00:18:18.936 [2024-10-08 18:23:31.801679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.801706] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
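The "NOT wait 3439432" trace above is the harness deliberately waiting on the background I/O job (pid 3439432) and expecting a non-zero exit, since its controllers are being torn down underneath it. A minimal sketch of that idiom, simplified relative to the real helper in autotest_common.sh, which also validates the argument and normalizes exit codes:

    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT wait 3439432    # passes only because the waited-on job exits with an error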
00:18:18.936 [2024-10-08 18:23:31.801746] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:31.801756] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:31.801767] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:18.936 [2024-10-08 18:23:31.801841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:31.981664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.981685] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:18.936 [2024-10-08 18:23:31.981722] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:31.981731] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:31.981741] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:18:18.936 [2024-10-08 18:23:31.981763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:31.995246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.995265] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:18:18.936 [2024-10-08 18:23:31.996377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.996392] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:18.936 [2024-10-08 18:23:31.997674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.997691] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:18.936 [2024-10-08 18:23:31.998895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:31.998936] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:18:18.936 [2024-10-08 18:23:32.000368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:32.000382] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:18:18.936 [2024-10-08 18:23:32.001717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:32.001757] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:18:18.936 [2024-10-08 18:23:32.002962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:32.003017] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:18:18.936 [2024-10-08 18:23:32.003047] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.003075] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.003106] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] already in failed state 00:18:18.936 [2024-10-08 18:23:32.003195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.004513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:18.936 [2024-10-08 18:23:32.004553] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:18.936 [2024-10-08 18:23:32.004579] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004636] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:18:18.936 [2024-10-08 18:23:32.004671] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004699] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004727] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:18:18.936 [2024-10-08 18:23:32.004760] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004790] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004818] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:18:18.936 [2024-10-08 18:23:32.004831] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004842] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004853] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:18:18.936 [2024-10-08 18:23:32.004866] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004876] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004891] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:18:18.936 [2024-10-08 18:23:32.004904] 
nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.004916] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:18:18.936 [2024-10-08 18:23:32.004926] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:18:18.936 [2024-10-08 18:23:32.005014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.936 [2024-10-08 18:23:32.005092] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:18:18.936 [2024-10-08 18:23:32.005103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:18:18.937 [2024-10-08 18:23:32.005114] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] already in failed state 00:18:18.937 [2024-10-08 18:23:32.005183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
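Each of the ten reinitialization failures above corresponds to one controller (cnode1 through cnode10) that the I/O job had attached over RDMA earlier in the test. For reference, one such attachment could be issued by hand roughly like this sketch, where the bdev name and flag choices are illustrative and the address, port and NQN are the ones appearing in this log:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1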
00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:19.196 rmmod nvme_rdma 00:18:19.196 rmmod nvme_fabrics 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3439111 ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3439111 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3439111 ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3439111 00:18:19.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3439111) - No such process 00:18:19.196 
18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3439111 is not found' 00:18:19.196 Process with pid 3439111 is not found 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:19.196 00:18:19.196 real 0m6.359s 00:18:19.196 user 0m19.397s 00:18:19.196 sys 0m1.551s 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:19.196 ************************************ 00:18:19.196 END TEST nvmf_shutdown_tc3 00:18:19.196 ************************************ 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.196 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:19.196 ************************************ 00:18:19.196 START TEST nvmf_shutdown_tc4 00:18:19.196 ************************************ 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
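The e810/x722/mlx arrays being filled above hold the PCI device IDs that the harness matches against the host; 0x15b3:0x1015 is the ConnectX-4 Lx that is found at 0000:18:00.0 and 0000:18:00.1 just below. The same lookup can be done directly with lspci:

    lspci -nn -d 15b3:        # every Mellanox function with its [vendor:device] ID
    lspci -nn -d 15b3:1015    # only ConnectX-4 Lx ports, e.g. 0000:18:00.0 and 0000:18:00.1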
00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:19.197 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:19.457 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:19.457 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:19.457 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.458 
18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:19.458 Found net devices under 0000:18:00.0: mlx_0_0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:19.458 Found net devices under 0000:18:00.1: mlx_0_1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # rdma_device_init 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:19.458 18:23:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:19.458 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:19.458 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:19.458 altname enp24s0f0np0 00:18:19.458 altname ens785f0np0 00:18:19.458 inet 192.168.100.8/24 scope global mlx_0_0 00:18:19.458 valid_lft forever preferred_lft forever 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:19.458 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:19.458 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:19.458 altname enp24s0f1np1 00:18:19.458 altname ens785f1np1 00:18:19.458 inet 192.168.100.9/24 scope global mlx_0_1 00:18:19.458 valid_lft forever preferred_lft forever 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:19.458 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:19.459 192.168.100.9' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:19.459 192.168.100.9' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # head -n 1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:19.459 192.168.100.9' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # tail -n +2 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # head -n 1 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.459 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3440100 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3440100 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3440100 ']' 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.719 18:23:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:19.719 [2024-10-08 18:23:32.679411] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:19.719 [2024-10-08 18:23:32.679477] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.719 [2024-10-08 18:23:32.767931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.719 [2024-10-08 18:23:32.855160] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.719 [2024-10-08 18:23:32.855221] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.719 [2024-10-08 18:23:32.855230] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.719 [2024-10-08 18:23:32.855238] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.719 [2024-10-08 18:23:32.855245] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
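The nvmf_tgt command line recorded above uses core mask 0x1E, i.e. binary 11110, so reactors come up on cores 1 through 4, which is exactly what the four "Reactor started" notices below report. Started by hand the same way, with paths shortened, -e 0xFFFF enabling all tracepoint groups and -i 0 selecting shared-memory id 0:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E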
00:18:19.719 [2024-10-08 18:23:32.856718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.719 [2024-10-08 18:23:32.856822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.719 [2024-10-08 18:23:32.856921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.719 [2024-10-08 18:23:32.856923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:20.658 [2024-10-08 18:23:33.614214] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca85e0/0x1cacad0) succeed. 00:18:20.658 [2024-10-08 18:23:33.624747] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca9c20/0x1cee170) succeed. 
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.658 18:23:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:20.658 Malloc1
00:18:20.917 [2024-10-08 18:23:33.849005] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:20.917 Malloc2
00:18:20.917 Malloc3
00:18:20.917 Malloc4
00:18:20.917 Malloc5
00:18:20.917 Malloc6
00:18:21.177 Malloc7
00:18:21.177 Malloc8
00:18:21.177 Malloc9
00:18:21.177 Malloc10
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3440336
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:18:21.177 18:23:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:18:21.436 [2024-10-08 18:23:34.398871] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
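The bare rpc_cmd at shutdown.sh@36 presumably replays the rpcs.txt file assembled by the for/cat loop above, which is why Malloc1 through Malloc10 are echoed back and the target starts listening on 192.168.100.8:4420. The literal contents of that file are not in the log; per subsystem the work amounts to roughly the following sketch, where the malloc size, block size, and serial number are illustrative assumptions (the cnodeN naming matches the NQNs that appear later in the log):

    # Approximate shape of one iteration of the create_subsystems loop (assumed):
    # back the subsystem with a malloc bdev and expose it on the RDMA listener.
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

spdk_nvme_perf is then started in the background (perfpid=3440336) against that listener: -q 128 keeps 128 commands outstanding per queue, -o 45056 issues 44 KiB writes, -w randwrite selects the workload, -t 20 runs for 20 seconds, and -r carries the transport ID of the RDMA listener created above. The discovery-subsystem warning that follows appears because the initiator connects to discovery through a listener that was never explicitly added to the discovery subsystem, which the target still allows but flags as deprecated.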
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3440100
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3440100 ']'
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3440100
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3440100
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3440100'
killing process with pid 3440100
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3440100
00:18:26.800 18:23:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3440100
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 starting I/O failed: -6
00:18:26.800 NVMe io qpair process completion error
00:18:26.800 NVMe io qpair process completion error
00:18:27.060 18:23:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
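This is the heart of shutdown test case 4: while spdk_nvme_perf is still writing, killprocess sends a plain kill (SIGTERM by default) to the target, pid 3440100, so the qpairs above report completion errors, every outstanding write then completes with sct=0, sc=8 (the NVMe generic status "Command Aborted due to SQ Deletion"), and keep-alives to the individual cnode subsystems start failing shortly afterwards. Paraphrased as a sketch, with illustrative variable names rather than the literal shutdown.sh contents:

    # Shutdown-under-load pattern exercised by nvmf_shutdown_tc4 (sketch):
    spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                  # let the random-write workload ramp up
    kill "$nvmfpid"          # terminate the nvmf target while I/O is in flight
    sleep 1
    wait "$perfpid" || true  # perf drains its qpairs with aborted writes, as below

The flood of "Write completed with error" and "Submitting Keep Alive failed" entries that follows is the failure signature this test case deliberately provokes.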
00:18:27.321 Write completed with error (sct=0, sc=8)
00:18:27.321 starting I/O failed: -6
[the two entries above repeat for every outstanding write between 00:18:27.321 and 00:18:27.613 while the qpairs are deleted; the distinct controller errors logged in that window are:]
00:18:27.322 [2024-10-08 18:23:40.477372] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Submitting Keep Alive failed
00:18:27.323 [2024-10-08 18:23:40.486512] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Submitting Keep Alive failed
00:18:27.597 [2024-10-08 18:23:40.495975] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Submitting Keep Alive failed
00:18:27.600 [2024-10-08 18:23:40.505840] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Submitting Keep Alive failed
00:18:27.603 [2024-10-08 18:23:40.517800] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Submitting Keep Alive failed
00:18:27.607 [2024-10-08 18:23:40.530466] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Submitting Keep Alive failed
00:18:27.613 Write completed with
error (sct=0, sc=8) 00:18:27.613 Write completed with error (sct=0, sc=8) 00:18:27.613 [2024-10-08 18:23:40.542746] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:18:27.613 NVMe io qpair process completion error 00:18:27.613 NVMe io qpair process completion error 00:18:27.613 NVMe io qpair process completion error 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3440336 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3440336 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.188 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3440336 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 
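The shell trace just above ("NOT wait 3440336" through "wait 3440336") is the autotest framework asserting that waiting on the perf process exits non-zero once the target has been shut down under it. A minimal sketch of that inversion helper, simplified from the valid_exec_arg/es bookkeeping traced here (names and structure are illustrative, not the exact autotest_common.sh code, which also special-cases signal exits where es > 128 as seen further below):

    NOT() {
        # run the supplied command and invert its result: succeed only if it failed
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # usage matching the trace above: NOT wait 3440336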
Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 [2024-10-08 18:23:41.547364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.450 [2024-10-08 18:23:41.547435] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.450 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write 
completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 [2024-10-08 18:23:41.549790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.451 [2024-10-08 18:23:41.549836] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed 
with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 [2024-10-08 18:23:41.558570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.451 [2024-10-08 18:23:41.558617] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 [2024-10-08 18:23:41.560571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 0 00:18:28.451 [2024-10-08 18:23:41.560616] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.451 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.563504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.452 [2024-10-08 18:23:41.563548] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.565832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.452 [2024-10-08 18:23:41.565880] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
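Each "CQ transport error -6 (No such device or address)" line is the RDMA completion path reporting -ENXIO (errno 6) for a controller whose subsystem has just been torn down, after which nvme_ctrlr_fail marks that controller as failed; the same -6 appears earlier on the submission side as "starting I/O failed: -6". A hedged one-liner to tally which controllers reached the failed state in a saved copy of this log ("console.log" is a placeholder):

    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*\] in failed state' console.log | sort | uniq -c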
00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.568130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.452 [2024-10-08 18:23:41.568173] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.570393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.452 [2024-10-08 18:23:41.570436] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.572472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 [2024-10-08 18:23:41.572520] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.452 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 Write completed with error (sct=0, sc=8) 00:18:28.453 [2024-10-08 18:23:41.611261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:28.453 [2024-10-08 18:23:41.611302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:28.453 Initializing NVMe Controllers 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
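All of the controllers listed here sit behind the same RDMA listener at 192.168.100.8:4420. For reference, attaching to one of them by hand with nvme-cli would look like the sketch below (a hedged example reusing the address and NQNs reported in this log; it is not a command the test itself runs at this point, and the test's own connects additionally pass the --hostnqn/--hostid pair defined later in common.sh):

    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode3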
00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9 00:18:28.453 Controller IO queue size 128, less than required. 00:18:28.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:18:28.453 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:18:28.453 Initialization complete. Launching workers. 
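The repeated "Controller IO queue size 128, less than required" warning means the perf tool's queue depth exceeds what each controller's IO queue can actually hold, so the excess requests are queued inside the driver. A hedged re-run with a lower queue depth would look like this (the ~44 KiB IO size is inferred from the IOPS-to-MiB/s ratio in the table that follows, and the workload and runtime flags are illustrative rather than the test's exact arguments):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 45056 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'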
00:18:28.453 ======================================================== 00:18:28.453 Latency(us) 00:18:28.453 Device Information : IOPS MiB/s Average min max 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1488.61 63.96 84931.97 117.29 1162752.80 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1481.06 63.64 85448.22 102.62 1178001.81 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1505.54 64.69 99194.33 113.80 2202796.58 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1486.09 63.86 85196.16 117.46 1185986.99 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1486.43 63.87 85267.29 113.02 1181286.79 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1490.12 64.03 85160.40 111.41 1177316.93 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1487.94 63.93 85295.34 117.87 1183090.10 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1466.15 63.00 86818.90 117.53 1243739.48 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1501.35 64.51 99338.89 113.74 2146601.44 00:18:28.453 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1502.02 64.54 99400.39 114.86 2111564.60 00:18:28.453 ======================================================== 00:18:28.453 Total : 14895.29 640.03 89640.66 102.62 2202796.58 00:18:28.453 00:18:28.453 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 
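The MiB/s column in the table above is consistent with IOPS multiplied by the IO size: MiB/s ≈ IOPS × io_size_bytes / 2^20. A hedged back-of-the-envelope check against the cnode3 row (the ~45056-byte IO size is inferred from the numbers, not printed by the tool):

    awk 'BEGIN { iops = 1488.61; io = 45056; printf "%.2f MiB/s\n", iops * io / 1048576 }'
    # prints ~63.96 MiB/s, matching the cnode3 row above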
00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:28.715 rmmod nvme_rdma 00:18:28.715 rmmod nvme_fabrics 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3440100 ']' 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3440100 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3440100 ']' 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3440100 00:18:28.715 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3440100) - No such process 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3440100 is not found' 00:18:28.715 Process with pid 3440100 is not found 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:28.715 00:18:28.715 real 0m9.364s 00:18:28.715 user 0m34.908s 00:18:28.715 sys 0m1.511s 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:28.715 ************************************ 00:18:28.715 END TEST nvmf_shutdown_tc4 00:18:28.715 ************************************ 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:18:28.715 00:18:28.715 real 0m36.176s 00:18:28.715 user 1m50.219s 00:18:28.715 sys 0m11.276s 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:28.715 ************************************ 00:18:28.715 END TEST nvmf_shutdown 00:18:28.715 ************************************ 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:28.715 00:18:28.715 real 8m5.957s 00:18:28.715 user 19m25.175s 00:18:28.715 sys 2m30.056s 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.715 18:23:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:28.715 ************************************ 00:18:28.715 END TEST nvmf_target_extra 00:18:28.715 
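nvmftestfini above unloads nvme-rdma and nvme-fabrics and then calls killprocess on the target's pid (3440100); because the target already went away during the shutdown test, kill -0 reports "No such process" and the helper simply logs that and moves on. A simplified sketch of that pattern (illustrative, not the exact autotest_common.sh implementation):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            # process still exists: ask it to exit
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }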
************************************ 00:18:28.715 18:23:41 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:28.715 18:23:41 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:28.715 18:23:41 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.715 18:23:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:28.975 ************************************ 00:18:28.975 START TEST nvmf_host 00:18:28.975 ************************************ 00:18:28.975 18:23:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:28.975 * Looking for test storage... 00:18:28.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:28.975 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.976 --rc genhtml_branch_coverage=1 00:18:28.976 --rc genhtml_function_coverage=1 00:18:28.976 --rc genhtml_legend=1 00:18:28.976 --rc geninfo_all_blocks=1 00:18:28.976 --rc geninfo_unexecuted_blocks=1 00:18:28.976 00:18:28.976 ' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.976 --rc genhtml_branch_coverage=1 00:18:28.976 --rc genhtml_function_coverage=1 00:18:28.976 --rc genhtml_legend=1 00:18:28.976 --rc geninfo_all_blocks=1 00:18:28.976 --rc geninfo_unexecuted_blocks=1 00:18:28.976 00:18:28.976 ' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.976 --rc genhtml_branch_coverage=1 00:18:28.976 --rc genhtml_function_coverage=1 00:18:28.976 --rc genhtml_legend=1 00:18:28.976 --rc geninfo_all_blocks=1 00:18:28.976 --rc geninfo_unexecuted_blocks=1 00:18:28.976 00:18:28.976 ' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:28.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.976 --rc genhtml_branch_coverage=1 00:18:28.976 --rc genhtml_function_coverage=1 00:18:28.976 --rc genhtml_legend=1 00:18:28.976 --rc geninfo_all_blocks=1 00:18:28.976 --rc geninfo_unexecuted_blocks=1 00:18:28.976 00:18:28.976 ' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.976 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.976 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.236 ************************************ 00:18:29.236 START TEST nvmf_multicontroller 00:18:29.236 ************************************ 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:29.236 * Looking for test storage... 
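run_test is what produces the START TEST / END TEST banners and the real/user/sys timing seen throughout this log. A minimal sketch of that wrapper (hedged; the real helper in autotest_common.sh also manages xtrace and performs argument checks such as the "'[' 3 -le 1 ']'" test traced above):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }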
00:18:29.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:18:29.236 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.237 --rc genhtml_branch_coverage=1 00:18:29.237 --rc genhtml_function_coverage=1 00:18:29.237 --rc genhtml_legend=1 00:18:29.237 --rc geninfo_all_blocks=1 00:18:29.237 --rc geninfo_unexecuted_blocks=1 00:18:29.237 00:18:29.237 ' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.237 --rc genhtml_branch_coverage=1 00:18:29.237 --rc genhtml_function_coverage=1 00:18:29.237 --rc genhtml_legend=1 00:18:29.237 --rc geninfo_all_blocks=1 00:18:29.237 --rc geninfo_unexecuted_blocks=1 00:18:29.237 00:18:29.237 ' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.237 --rc genhtml_branch_coverage=1 00:18:29.237 --rc genhtml_function_coverage=1 00:18:29.237 --rc genhtml_legend=1 00:18:29.237 --rc geninfo_all_blocks=1 00:18:29.237 --rc geninfo_unexecuted_blocks=1 00:18:29.237 00:18:29.237 ' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.237 --rc genhtml_branch_coverage=1 00:18:29.237 --rc genhtml_function_coverage=1 00:18:29.237 --rc genhtml_legend=1 00:18:29.237 --rc geninfo_all_blocks=1 00:18:29.237 --rc geninfo_unexecuted_blocks=1 00:18:29.237 00:18:29.237 ' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
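The xtrace above is scripts/common.sh deciding, through its dotted-version comparison helpers, that the installed lcov (1.15) is older than 2 before exporting the coverage flags into LCOV_OPTS and LCOV. A minimal standalone sketch of that kind of field-by-field version check follows; version_lt and its variables are illustrative names, not the exact lt()/cmp_versions() helpers traced here.

#!/usr/bin/env bash
# Sketch: compare two dotted version strings numerically, field by field,
# in the same spirit as the lt()/cmp_versions() calls traced above.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not "less than"
}

# Mirrors the decision above: lcov 1.15 < 2, so the legacy --rc lcov_* option names are kept.
if version_lt "1.15" "2"; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    echo "lcov < 2: using $lcov_rc_opt"
fi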
00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:29.237 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:29.237 18:23:42 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:29.237 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:18:29.237 00:18:29.237 real 0m0.233s 00:18:29.237 user 0m0.127s 00:18:29.237 sys 0m0.122s 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.237 18:23:42 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:29.237 ************************************ 00:18:29.237 END TEST nvmf_multicontroller 00:18:29.237 ************************************ 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.497 ************************************ 00:18:29.497 START TEST nvmf_aer 00:18:29.497 ************************************ 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:29.497 * Looking for test storage... 
00:18:29.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:18:29.497 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.757 --rc genhtml_branch_coverage=1 00:18:29.757 --rc genhtml_function_coverage=1 00:18:29.757 --rc genhtml_legend=1 00:18:29.757 --rc geninfo_all_blocks=1 00:18:29.757 --rc geninfo_unexecuted_blocks=1 00:18:29.757 00:18:29.757 ' 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.757 --rc genhtml_branch_coverage=1 00:18:29.757 --rc genhtml_function_coverage=1 00:18:29.757 --rc genhtml_legend=1 00:18:29.757 --rc geninfo_all_blocks=1 00:18:29.757 --rc geninfo_unexecuted_blocks=1 00:18:29.757 00:18:29.757 ' 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.757 --rc genhtml_branch_coverage=1 00:18:29.757 --rc genhtml_function_coverage=1 00:18:29.757 --rc genhtml_legend=1 00:18:29.757 --rc geninfo_all_blocks=1 00:18:29.757 --rc geninfo_unexecuted_blocks=1 00:18:29.757 00:18:29.757 ' 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.757 --rc genhtml_branch_coverage=1 00:18:29.757 --rc genhtml_function_coverage=1 00:18:29.757 --rc genhtml_legend=1 00:18:29.757 --rc geninfo_all_blocks=1 00:18:29.757 --rc geninfo_unexecuted_blocks=1 00:18:29.757 00:18:29.757 ' 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.757 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:29.758 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:18:29.758 18:23:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:36.332 18:23:49 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:36.332 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:36.332 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.332 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:36.333 Found net devices under 0000:18:00.0: mlx_0_0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:36.333 
18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:36.333 Found net devices under 0000:18:00.1: mlx_0_1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # rdma_device_init 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.333 18:23:49 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:36.333 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.333 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:36.333 altname enp24s0f0np0 00:18:36.333 altname ens785f0np0 00:18:36.333 inet 192.168.100.8/24 scope global mlx_0_0 00:18:36.333 valid_lft forever preferred_lft forever 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:36.333 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.333 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:36.333 altname enp24s0f1np1 00:18:36.333 altname ens785f1np1 00:18:36.333 inet 192.168.100.9/24 scope global mlx_0_1 00:18:36.333 valid_lft forever preferred_lft forever 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:36.333 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:36.593 192.168.100.9' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:36.593 192.168.100.9' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # head -n 1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:36.593 192.168.100.9' 
00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # tail -n +2 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # head -n 1 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3444461 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3444461 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3444461 ']' 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.593 18:23:49 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:36.593 [2024-10-08 18:23:49.671961] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:36.593 [2024-10-08 18:23:49.672036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.593 [2024-10-08 18:23:49.757719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.853 [2024-10-08 18:23:49.846599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.853 [2024-10-08 18:23:49.846647] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.853 [2024-10-08 18:23:49.846656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.853 [2024-10-08 18:23:49.846664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:36.853 [2024-10-08 18:23:49.846671] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.853 [2024-10-08 18:23:49.848083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.853 [2024-10-08 18:23:49.848186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.853 [2024-10-08 18:23:49.848286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.853 [2024-10-08 18:23:49.848288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.421 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.680 [2024-10-08 18:23:50.597913] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5922e0/0x5967d0) succeed. 00:18:37.680 [2024-10-08 18:23:50.608395] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x593920/0x5d7e70) succeed. 
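At this point the test has started nvmf_tgt (pid 3444461, core mask 0xF) and issued nvmf_create_transport over the application's JSON-RPC socket; the two create_ib_device notices confirm the RDMA transport bound both mlx5 ports. Outside the harness, the same step could be reproduced roughly as sketched below. This is a hedged sketch, not the harness code: rpc_cmd above is a test wrapper, scripts/rpc.py is the underlying client, the retry loop is an assumption standing in for the harness's waitforlisten, and the option values are simply the ones visible in the trace.

#!/usr/bin/env bash
# Sketch: bring up an SPDK NVMe-oF target with an RDMA transport by hand,
# driving the same JSON-RPCs the harness issues through rpc_cmd above.
# SPDK_DIR simply points at this job's workspace checkout.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Start the target (default RPC socket /var/tmp/spdk.sock) and wait until it answers RPCs.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0xF &
tgt_pid=$!
echo "nvmf_tgt running as pid $tgt_pid"
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Create the RDMA transport with the option values visible in the trace.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Tear down when finished:
# kill "$tgt_pid"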
00:18:37.680 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.681 Malloc0 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.681 [2024-10-08 18:23:50.776126] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.681 [ 00:18:37.681 { 00:18:37.681 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:37.681 "subtype": "Discovery", 00:18:37.681 "listen_addresses": [], 00:18:37.681 "allow_any_host": true, 00:18:37.681 "hosts": [] 00:18:37.681 }, 00:18:37.681 { 00:18:37.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.681 "subtype": "NVMe", 00:18:37.681 "listen_addresses": [ 00:18:37.681 { 00:18:37.681 "trtype": "RDMA", 00:18:37.681 "adrfam": "IPv4", 00:18:37.681 "traddr": "192.168.100.8", 00:18:37.681 "trsvcid": "4420" 00:18:37.681 } 00:18:37.681 ], 00:18:37.681 "allow_any_host": true, 00:18:37.681 "hosts": [], 00:18:37.681 "serial_number": "SPDK00000000000001", 00:18:37.681 "model_number": "SPDK bdev Controller", 00:18:37.681 "max_namespaces": 2, 00:18:37.681 "min_cntlid": 1, 00:18:37.681 "max_cntlid": 65519, 00:18:37.681 "namespaces": [ 00:18:37.681 { 00:18:37.681 "nsid": 1, 00:18:37.681 "bdev_name": "Malloc0", 00:18:37.681 "name": "Malloc0", 00:18:37.681 "nguid": "B5185B834BB840ED8707A0D179B03E0D", 00:18:37.681 "uuid": "b5185b83-4bb8-40ed-8707-a0d179b03e0d" 00:18:37.681 } 00:18:37.681 ] 00:18:37.681 } 00:18:37.681 ] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3444662 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:18:37.681 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:37.941 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.941 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:37.941 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:18:37.941 18:23:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.941 Malloc1 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.941 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.941 [ 00:18:37.941 { 00:18:37.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:37.941 "subtype": "Discovery", 00:18:37.941 "listen_addresses": [], 00:18:37.941 "allow_any_host": true, 00:18:37.941 "hosts": [] 00:18:37.941 }, 00:18:37.941 { 00:18:37.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.941 "subtype": "NVMe", 00:18:37.941 "listen_addresses": [ 00:18:37.941 { 00:18:37.941 "trtype": "RDMA", 00:18:37.941 "adrfam": "IPv4", 00:18:37.941 "traddr": "192.168.100.8", 00:18:37.941 "trsvcid": "4420" 00:18:37.941 } 00:18:37.941 ], 00:18:37.941 "allow_any_host": true, 00:18:37.941 "hosts": [], 00:18:37.941 "serial_number": "SPDK00000000000001", 00:18:37.941 "model_number": "SPDK bdev Controller", 00:18:37.941 "max_namespaces": 2, 00:18:37.941 "min_cntlid": 1, 00:18:37.941 "max_cntlid": 65519, 00:18:37.941 "namespaces": [ 00:18:37.941 { 00:18:37.941 "nsid": 1, 00:18:37.941 "bdev_name": "Malloc0", 00:18:37.941 "name": "Malloc0", 00:18:37.942 "nguid": "B5185B834BB840ED8707A0D179B03E0D", 00:18:37.942 "uuid": "b5185b83-4bb8-40ed-8707-a0d179b03e0d" 00:18:37.942 }, 00:18:37.942 { 00:18:37.942 "nsid": 2, 00:18:37.942 "bdev_name": "Malloc1", 00:18:37.942 "name": "Malloc1", 00:18:37.942 "nguid": "1F1F0E4342AB4E49A90BAF2DE5FD5D8F", 00:18:37.942 "uuid": "1f1f0e43-42ab-4e49-a90b-af2de5fd5d8f" 00:18:37.942 } 00:18:37.942 ] 00:18:37.942 } 00:18:37.942 ] 00:18:37.942 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.942 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3444662 00:18:37.942 Asynchronous Event Request test 00:18:37.942 Attaching to 192.168.100.8 00:18:37.942 Attached to 192.168.100.8 00:18:37.942 Registering asynchronous event callbacks... 00:18:37.942 Starting namespace attribute notice tests for all controllers... 00:18:37.942 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:37.942 aer_cb - Changed Namespace 00:18:37.942 Cleaning up... 
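The sequence above is the core of the AER check: the host-side aer tool attaches to 192.168.100.8, touches /tmp/aer_touch_file once its asynchronous-event callbacks are registered, the script then hot-adds Malloc1 as namespace 2, and the resulting Namespace Attribute Changed notice is what produces the "aer_cb - Changed Namespace" line before cleanup. The touch-file handshake relies on a simple polling helper like the waitforfile loop traced just above (200 iterations of a 0.1 s sleep); a sketch of that pattern follows, with illustrative names rather than the exact autotest_common.sh helper.

#!/usr/bin/env bash
# Sketch: poll for a sentinel file that another process creates, giving up
# after roughly 20 seconds (200 iterations x 0.1 s), in the spirit of the
# waitforfile loop traced above.
wait_for_file() {
    local path=$1 i=0
    while [[ ! -e $path ]] && (( i < 200 )); do
        i=$(( i + 1 ))
        sleep 0.1
    done
    [[ -e $path ]]   # exit status tells the caller whether the file appeared in time
}

# Handshake used by the AER test, roughly:
#   1. start the aer tool with -n 2 -t /tmp/aer_touch_file in the background
#   2. wait_for_file /tmp/aer_touch_file   -> AER callbacks are registered
#   3. rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
#   4. the tool reports "aer_cb - Changed Namespace" and exits
wait_for_file /tmp/aer_touch_file || echo "timed out waiting for the AER tool" >&2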
00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:38.202 rmmod nvme_rdma 00:18:38.202 rmmod nvme_fabrics 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3444461 ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3444461 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3444461 ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3444461 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3444461 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3444461' 00:18:38.202 killing process 
with pid 3444461 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3444461 00:18:38.202 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3444461 00:18:38.461 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:38.461 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:38.461 00:18:38.461 real 0m9.134s 00:18:38.461 user 0m8.919s 00:18:38.461 sys 0m5.934s 00:18:38.461 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.461 18:23:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:38.461 ************************************ 00:18:38.461 END TEST nvmf_aer 00:18:38.461 ************************************ 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.721 ************************************ 00:18:38.721 START TEST nvmf_async_init 00:18:38.721 ************************************ 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:38.721 * Looking for test storage... 00:18:38.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:38.721 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:38.722 
18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.722 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.982 --rc genhtml_branch_coverage=1 00:18:38.982 --rc genhtml_function_coverage=1 00:18:38.982 --rc genhtml_legend=1 00:18:38.982 --rc geninfo_all_blocks=1 00:18:38.982 --rc geninfo_unexecuted_blocks=1 00:18:38.982 00:18:38.982 ' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.982 --rc genhtml_branch_coverage=1 00:18:38.982 --rc genhtml_function_coverage=1 00:18:38.982 --rc genhtml_legend=1 00:18:38.982 --rc geninfo_all_blocks=1 00:18:38.982 --rc geninfo_unexecuted_blocks=1 00:18:38.982 00:18:38.982 ' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.982 --rc genhtml_branch_coverage=1 00:18:38.982 --rc genhtml_function_coverage=1 00:18:38.982 --rc genhtml_legend=1 00:18:38.982 --rc geninfo_all_blocks=1 00:18:38.982 --rc geninfo_unexecuted_blocks=1 00:18:38.982 00:18:38.982 ' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.982 --rc genhtml_branch_coverage=1 00:18:38.982 --rc genhtml_function_coverage=1 00:18:38.982 --rc genhtml_legend=1 00:18:38.982 --rc geninfo_all_blocks=1 00:18:38.982 --rc geninfo_unexecuted_blocks=1 00:18:38.982 00:18:38.982 ' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.982 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.983 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4fa35e10dbab4789ab519697526162bc 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:18:38.983 18:23:51 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:45.555 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:45.556 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:45.556 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:45.556 Found net devices under 0000:18:00.0: mlx_0_0 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:45.556 Found net devices under 0000:18:00.1: mlx_0_1 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # rdma_device_init 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:45.556 18:23:58 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:45.556 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:45.816 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.816 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:45.816 altname enp24s0f0np0 00:18:45.816 altname ens785f0np0 00:18:45.816 inet 192.168.100.8/24 scope global mlx_0_0 00:18:45.816 valid_lft forever preferred_lft forever 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:45.816 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.816 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:45.816 altname enp24s0f1np1 00:18:45.816 altname ens785f1np1 00:18:45.816 inet 192.168.100.9/24 scope global mlx_0_1 00:18:45.816 valid_lft forever preferred_lft forever 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:45.816 192.168.100.9' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:45.816 192.168.100.9' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # head -n 1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:45.816 192.168.100.9' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # tail -n +2 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # head -n 1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 
-- # modprobe nvme-rdma 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3447715 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3447715 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3447715 ']' 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.816 18:23:58 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:45.816 [2024-10-08 18:23:58.913245] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:45.816 [2024-10-08 18:23:58.913307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.076 [2024-10-08 18:23:58.997450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.076 [2024-10-08 18:23:59.086129] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.076 [2024-10-08 18:23:59.086168] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.076 [2024-10-08 18:23:59.086178] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.076 [2024-10-08 18:23:59.086187] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.076 [2024-10-08 18:23:59.086194] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.076 [2024-10-08 18:23:59.086655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.644 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 [2024-10-08 18:23:59.837925] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f8f0e0/0x1f935d0) succeed. 00:18:46.904 [2024-10-08 18:23:59.846721] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f905e0/0x1fd4c70) succeed. 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 null0 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4fa35e10dbab4789ab519697526162bc 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 [2024-10-08 18:23:59.944986] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:23:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 nvme0n1 00:18:46.904 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.904 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:46.904 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.904 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.904 [ 00:18:46.904 { 00:18:46.904 "name": "nvme0n1", 00:18:46.904 "aliases": [ 00:18:46.904 "4fa35e10-dbab-4789-ab51-9697526162bc" 00:18:46.904 ], 00:18:46.904 "product_name": "NVMe disk", 00:18:46.904 "block_size": 512, 00:18:46.904 "num_blocks": 2097152, 00:18:46.904 "uuid": "4fa35e10-dbab-4789-ab51-9697526162bc", 00:18:46.904 "numa_id": 0, 00:18:46.904 "assigned_rate_limits": { 00:18:46.904 "rw_ios_per_sec": 0, 00:18:46.904 "rw_mbytes_per_sec": 0, 00:18:46.904 "r_mbytes_per_sec": 0, 00:18:46.904 "w_mbytes_per_sec": 0 00:18:46.904 }, 00:18:46.904 "claimed": false, 00:18:46.904 "zoned": false, 00:18:46.904 "supported_io_types": { 00:18:46.904 "read": true, 00:18:46.904 "write": true, 00:18:46.904 "unmap": false, 00:18:46.904 "flush": true, 00:18:46.904 "reset": true, 00:18:46.904 "nvme_admin": true, 00:18:46.904 "nvme_io": true, 00:18:46.904 "nvme_io_md": false, 00:18:46.904 "write_zeroes": true, 00:18:46.904 "zcopy": false, 00:18:46.904 "get_zone_info": false, 00:18:46.904 "zone_management": false, 00:18:46.904 "zone_append": false, 00:18:46.904 "compare": true, 00:18:46.904 "compare_and_write": true, 00:18:46.904 "abort": true, 00:18:46.904 "seek_hole": false, 00:18:46.904 "seek_data": false, 00:18:46.904 "copy": true, 00:18:46.904 "nvme_iov_md": false 00:18:46.904 }, 00:18:46.904 "memory_domains": [ 00:18:46.904 { 00:18:46.904 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:46.904 "dma_device_type": 0 00:18:46.904 } 00:18:46.904 ], 00:18:46.904 "driver_specific": { 00:18:46.904 "nvme": [ 00:18:46.904 { 00:18:46.905 "trid": { 00:18:46.905 "trtype": "RDMA", 00:18:46.905 "adrfam": "IPv4", 00:18:46.905 "traddr": "192.168.100.8", 00:18:46.905 "trsvcid": "4420", 00:18:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:46.905 }, 00:18:46.905 "ctrlr_data": { 00:18:46.905 "cntlid": 1, 00:18:46.905 "vendor_id": "0x8086", 00:18:46.905 "model_number": "SPDK bdev Controller", 00:18:46.905 "serial_number": "00000000000000000000", 00:18:46.905 "firmware_revision": "25.01", 00:18:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:46.905 "oacs": { 00:18:46.905 "security": 0, 
00:18:46.905 "format": 0, 00:18:46.905 "firmware": 0, 00:18:46.905 "ns_manage": 0 00:18:46.905 }, 00:18:46.905 "multi_ctrlr": true, 00:18:46.905 "ana_reporting": false 00:18:46.905 }, 00:18:46.905 "vs": { 00:18:46.905 "nvme_version": "1.3" 00:18:46.905 }, 00:18:46.905 "ns_data": { 00:18:46.905 "id": 1, 00:18:46.905 "can_share": true 00:18:46.905 } 00:18:46.905 } 00:18:46.905 ], 00:18:46.905 "mp_policy": "active_passive" 00:18:46.905 } 00:18:46.905 } 00:18:46.905 ] 00:18:46.905 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.905 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:46.905 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.905 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:46.905 [2024-10-08 18:24:00.065499] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:47.164 [2024-10-08 18:24:00.089436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:47.164 [2024-10-08 18:24:00.111364] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:47.164 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.164 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:47.164 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.164 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.164 [ 00:18:47.164 { 00:18:47.164 "name": "nvme0n1", 00:18:47.164 "aliases": [ 00:18:47.164 "4fa35e10-dbab-4789-ab51-9697526162bc" 00:18:47.164 ], 00:18:47.164 "product_name": "NVMe disk", 00:18:47.164 "block_size": 512, 00:18:47.164 "num_blocks": 2097152, 00:18:47.164 "uuid": "4fa35e10-dbab-4789-ab51-9697526162bc", 00:18:47.164 "numa_id": 0, 00:18:47.164 "assigned_rate_limits": { 00:18:47.164 "rw_ios_per_sec": 0, 00:18:47.164 "rw_mbytes_per_sec": 0, 00:18:47.164 "r_mbytes_per_sec": 0, 00:18:47.164 "w_mbytes_per_sec": 0 00:18:47.164 }, 00:18:47.164 "claimed": false, 00:18:47.164 "zoned": false, 00:18:47.164 "supported_io_types": { 00:18:47.164 "read": true, 00:18:47.164 "write": true, 00:18:47.164 "unmap": false, 00:18:47.164 "flush": true, 00:18:47.164 "reset": true, 00:18:47.164 "nvme_admin": true, 00:18:47.164 "nvme_io": true, 00:18:47.164 "nvme_io_md": false, 00:18:47.164 "write_zeroes": true, 00:18:47.164 "zcopy": false, 00:18:47.164 "get_zone_info": false, 00:18:47.164 "zone_management": false, 00:18:47.164 "zone_append": false, 00:18:47.164 "compare": true, 00:18:47.164 "compare_and_write": true, 00:18:47.164 "abort": true, 00:18:47.164 "seek_hole": false, 00:18:47.164 "seek_data": false, 00:18:47.164 "copy": true, 00:18:47.164 "nvme_iov_md": false 00:18:47.164 }, 00:18:47.164 "memory_domains": [ 00:18:47.164 { 00:18:47.164 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:47.164 "dma_device_type": 0 00:18:47.164 } 00:18:47.164 ], 00:18:47.164 "driver_specific": { 00:18:47.164 "nvme": [ 00:18:47.164 { 00:18:47.164 "trid": { 00:18:47.164 "trtype": "RDMA", 00:18:47.164 "adrfam": "IPv4", 00:18:47.164 "traddr": "192.168.100.8", 00:18:47.164 "trsvcid": "4420", 00:18:47.164 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:18:47.164 }, 00:18:47.164 "ctrlr_data": { 00:18:47.164 "cntlid": 2, 00:18:47.164 "vendor_id": "0x8086", 00:18:47.164 "model_number": "SPDK bdev Controller", 00:18:47.164 "serial_number": "00000000000000000000", 00:18:47.164 "firmware_revision": "25.01", 00:18:47.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:47.164 "oacs": { 00:18:47.164 "security": 0, 00:18:47.164 "format": 0, 00:18:47.165 "firmware": 0, 00:18:47.165 "ns_manage": 0 00:18:47.165 }, 00:18:47.165 "multi_ctrlr": true, 00:18:47.165 "ana_reporting": false 00:18:47.165 }, 00:18:47.165 "vs": { 00:18:47.165 "nvme_version": "1.3" 00:18:47.165 }, 00:18:47.165 "ns_data": { 00:18:47.165 "id": 1, 00:18:47.165 "can_share": true 00:18:47.165 } 00:18:47.165 } 00:18:47.165 ], 00:18:47.165 "mp_policy": "active_passive" 00:18:47.165 } 00:18:47.165 } 00:18:47.165 ] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jKuOBbbLrY 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jKuOBbbLrY 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jKuOBbbLrY 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 [2024-10-08 18:24:00.207124] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 [2024-10-08 18:24:00.231180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.165 nvme0n1 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.165 [ 00:18:47.165 { 00:18:47.165 "name": "nvme0n1", 00:18:47.165 "aliases": [ 00:18:47.165 "4fa35e10-dbab-4789-ab51-9697526162bc" 00:18:47.165 ], 00:18:47.165 "product_name": "NVMe disk", 00:18:47.165 "block_size": 512, 00:18:47.165 "num_blocks": 2097152, 00:18:47.165 "uuid": "4fa35e10-dbab-4789-ab51-9697526162bc", 00:18:47.165 "numa_id": 0, 00:18:47.165 "assigned_rate_limits": { 00:18:47.165 "rw_ios_per_sec": 0, 00:18:47.165 "rw_mbytes_per_sec": 0, 00:18:47.165 "r_mbytes_per_sec": 0, 00:18:47.165 "w_mbytes_per_sec": 0 00:18:47.165 }, 00:18:47.165 "claimed": false, 00:18:47.165 "zoned": false, 00:18:47.165 "supported_io_types": { 00:18:47.165 "read": true, 00:18:47.165 "write": true, 00:18:47.165 "unmap": false, 00:18:47.165 "flush": true, 00:18:47.165 "reset": true, 00:18:47.165 "nvme_admin": true, 00:18:47.165 "nvme_io": true, 00:18:47.165 "nvme_io_md": false, 00:18:47.165 "write_zeroes": true, 00:18:47.165 "zcopy": false, 00:18:47.165 "get_zone_info": false, 00:18:47.165 "zone_management": false, 00:18:47.165 "zone_append": false, 00:18:47.165 "compare": true, 00:18:47.165 "compare_and_write": true, 00:18:47.165 "abort": true, 00:18:47.165 "seek_hole": false, 00:18:47.165 "seek_data": false, 00:18:47.165 "copy": true, 00:18:47.165 "nvme_iov_md": false 00:18:47.165 }, 00:18:47.165 "memory_domains": [ 00:18:47.165 { 00:18:47.165 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:47.165 "dma_device_type": 0 00:18:47.165 } 00:18:47.165 ], 00:18:47.165 "driver_specific": { 00:18:47.165 "nvme": [ 00:18:47.165 { 00:18:47.165 "trid": { 00:18:47.165 "trtype": "RDMA", 00:18:47.165 "adrfam": "IPv4", 00:18:47.165 "traddr": "192.168.100.8", 00:18:47.165 "trsvcid": "4421", 00:18:47.165 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:47.165 }, 00:18:47.165 "ctrlr_data": { 00:18:47.165 "cntlid": 3, 00:18:47.165 "vendor_id": "0x8086", 00:18:47.165 "model_number": "SPDK bdev Controller", 00:18:47.165 "serial_number": "00000000000000000000", 00:18:47.165 "firmware_revision": 
"25.01", 00:18:47.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:47.165 "oacs": { 00:18:47.165 "security": 0, 00:18:47.165 "format": 0, 00:18:47.165 "firmware": 0, 00:18:47.165 "ns_manage": 0 00:18:47.165 }, 00:18:47.165 "multi_ctrlr": true, 00:18:47.165 "ana_reporting": false 00:18:47.165 }, 00:18:47.165 "vs": { 00:18:47.165 "nvme_version": "1.3" 00:18:47.165 }, 00:18:47.165 "ns_data": { 00:18:47.165 "id": 1, 00:18:47.165 "can_share": true 00:18:47.165 } 00:18:47.165 } 00:18:47.165 ], 00:18:47.165 "mp_policy": "active_passive" 00:18:47.165 } 00:18:47.165 } 00:18:47.165 ] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.165 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jKuOBbbLrY 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:47.425 rmmod nvme_rdma 00:18:47.425 rmmod nvme_fabrics 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3447715 ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3447715 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3447715 ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3447715 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3447715 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3447715' 00:18:47.425 killing process with pid 3447715 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3447715 00:18:47.425 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3447715 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:47.685 00:18:47.685 real 0m9.022s 00:18:47.685 user 0m4.011s 00:18:47.685 sys 0m5.828s 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:47.685 ************************************ 00:18:47.685 END TEST nvmf_async_init 00:18:47.685 ************************************ 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.685 ************************************ 00:18:47.685 START TEST dma 00:18:47.685 ************************************ 00:18:47.685 18:24:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:47.945 * Looking for test storage... 
00:18:47.945 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.945 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:47.946 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:47.946 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.946 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:47.946 18:24:00 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.946 --rc genhtml_branch_coverage=1 00:18:47.946 --rc genhtml_function_coverage=1 00:18:47.946 --rc genhtml_legend=1 00:18:47.946 --rc geninfo_all_blocks=1 00:18:47.946 --rc geninfo_unexecuted_blocks=1 00:18:47.946 00:18:47.946 ' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.946 --rc genhtml_branch_coverage=1 00:18:47.946 --rc genhtml_function_coverage=1 00:18:47.946 --rc genhtml_legend=1 00:18:47.946 --rc geninfo_all_blocks=1 00:18:47.946 --rc geninfo_unexecuted_blocks=1 00:18:47.946 00:18:47.946 ' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.946 --rc genhtml_branch_coverage=1 00:18:47.946 --rc genhtml_function_coverage=1 00:18:47.946 --rc genhtml_legend=1 00:18:47.946 --rc geninfo_all_blocks=1 00:18:47.946 --rc geninfo_unexecuted_blocks=1 00:18:47.946 00:18:47.946 ' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.946 --rc genhtml_branch_coverage=1 00:18:47.946 --rc genhtml_function_coverage=1 00:18:47.946 --rc genhtml_legend=1 00:18:47.946 --rc geninfo_all_blocks=1 00:18:47.946 --rc geninfo_unexecuted_blocks=1 00:18:47.946 00:18:47.946 ' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.946 18:24:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.073 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:56.074 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:56.074 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:56.074 Found net devices under 0000:18:00.0: mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:56.074 Found net devices under 0000:18:00.1: mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # is_hw=yes 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # rdma_device_init 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:56.074 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:56.074 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:18:56.074 altname enp24s0f0np0 00:18:56.074 altname ens785f0np0 00:18:56.074 inet 192.168.100.8/24 scope global mlx_0_0 00:18:56.074 valid_lft forever preferred_lft forever 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:56.074 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:56.074 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:18:56.074 altname enp24s0f1np1 00:18:56.074 altname ens785f1np1 00:18:56.074 inet 192.168.100.9/24 scope global mlx_0_1 00:18:56.074 valid_lft forever preferred_lft forever 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # return 0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:56.074 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:56.075 192.168.100.9' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:56.075 192.168.100.9' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # head -n 1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:56.075 192.168.100.9' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # tail -n +2 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # head -n 1 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # nvmfpid=3451372 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # waitforlisten 3451372 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 3451372 ']' 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.075 18:24:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 [2024-10-08 18:24:08.026241] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:18:56.075 [2024-10-08 18:24:08.026306] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.075 [2024-10-08 18:24:08.112121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.075 [2024-10-08 18:24:08.201541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.075 [2024-10-08 18:24:08.201589] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.075 [2024-10-08 18:24:08.201599] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.075 [2024-10-08 18:24:08.201623] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.075 [2024-10-08 18:24:08.201630] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
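The nvmfappstart step traced above comes down to launching the target binary with all trace groups enabled on cores 0-1 and then waiting for its RPC socket; a rough standalone equivalent is sketched below, where the rpc_get_methods readiness probe is an assumption standing in for the harness's waitforlisten helper:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # poll the default /var/tmp/spdk.sock until the target answers RPCs
    ./scripts/rpc.py -t 60 rpc_get_methods > /dev/null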
00:18:56.075 [2024-10-08 18:24:08.202310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.075 [2024-10-08 18:24:08.202311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.075 18:24:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 [2024-10-08 18:24:08.956158] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d05c20/0x1d0a110) succeed. 00:18:56.075 [2024-10-08 18:24:08.965226] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d07120/0x1d4b7b0) succeed. 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 Malloc0 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:56.075 [2024-10-08 18:24:09.136952] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # config=() 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # local subsystem config 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:56.075 { 00:18:56.075 "params": { 00:18:56.075 "name": "Nvme$subsystem", 00:18:56.075 "trtype": "$TEST_TRANSPORT", 00:18:56.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.075 "adrfam": "ipv4", 00:18:56.075 "trsvcid": "$NVMF_PORT", 00:18:56.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.075 "hdgst": ${hdgst:-false}, 00:18:56.075 "ddgst": ${ddgst:-false} 00:18:56.075 }, 00:18:56.075 "method": "bdev_nvme_attach_controller" 00:18:56.075 } 00:18:56.075 EOF 00:18:56.075 )") 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # cat 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # jq . 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@583 -- # IFS=, 00:18:56.075 18:24:09 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:56.075 "params": { 00:18:56.075 "name": "Nvme0", 00:18:56.075 "trtype": "rdma", 00:18:56.075 "traddr": "192.168.100.8", 00:18:56.075 "adrfam": "ipv4", 00:18:56.075 "trsvcid": "4420", 00:18:56.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:56.075 "hdgst": false, 00:18:56.075 "ddgst": false 00:18:56.075 }, 00:18:56.075 "method": "bdev_nvme_attach_controller" 00:18:56.075 }' 00:18:56.075 [2024-10-08 18:24:09.189641] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
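Condensed, the rpc_cmd calls traced above make up the target-side configuration this first test_dma pass runs against; issued through the stock rpc.py wrapper they would read as follows (arguments exactly as in this run, the scripts/rpc.py path being the usual location):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420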
00:18:56.075 [2024-10-08 18:24:09.189700] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451571 ] 00:18:56.335 [2024-10-08 18:24:09.274244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.335 [2024-10-08 18:24:09.356471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.335 [2024-10-08 18:24:09.356471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.904 bdev Nvme0n1 reports 1 memory domains 00:19:02.904 bdev Nvme0n1 supports RDMA memory domain 00:19:02.904 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:02.904 ========================================================================== 00:19:02.904 Latency [us] 00:19:02.904 IOPS MiB/s Average min max 00:19:02.904 Core 2: 21357.51 83.43 748.51 246.50 6815.18 00:19:02.904 Core 3: 21521.48 84.07 742.77 251.07 6918.26 00:19:02.904 ========================================================================== 00:19:02.904 Total : 42878.99 167.50 745.63 246.50 6918.26 00:19:02.904 00:19:02.904 Total operations: 214428, translate 214428 pull_push 0 memzero 0 00:19:02.905 18:24:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:19:02.905 18:24:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:19:02.905 18:24:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:19:02.905 [2024-10-08 18:24:14.849198] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:19:02.905 [2024-10-08 18:24:14.849262] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452296 ] 00:19:02.905 [2024-10-08 18:24:14.934040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.905 [2024-10-08 18:24:15.019627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.905 [2024-10-08 18:24:15.019628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.193 bdev Malloc0 reports 2 memory domains 00:19:08.193 bdev Malloc0 doesn't support RDMA memory domain 00:19:08.193 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:08.193 ========================================================================== 00:19:08.193 Latency [us] 00:19:08.193 IOPS MiB/s Average min max 00:19:08.193 Core 2: 14161.73 55.32 1129.11 421.02 2244.71 00:19:08.193 Core 3: 14274.89 55.76 1120.13 442.45 2098.41 00:19:08.193 ========================================================================== 00:19:08.193 Total : 28436.61 111.08 1124.60 421.02 2244.71 00:19:08.193 00:19:08.193 Total operations: 142234, translate 0 pull_push 568936 memzero 0 00:19:08.193 18:24:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:19:08.193 18:24:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:19:08.193 18:24:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:19:08.193 18:24:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:19:08.193 Ignoring -M option 00:19:08.193 [2024-10-08 18:24:20.418902] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:19:08.193 [2024-10-08 18:24:20.418961] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453027 ] 00:19:08.193 [2024-10-08 18:24:20.501549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:08.193 [2024-10-08 18:24:20.581203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.193 [2024-10-08 18:24:20.581203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.469 bdev b1803b57-2282-4937-bb41-400895ce20f8 reports 1 memory domains 00:19:13.469 bdev b1803b57-2282-4937-bb41-400895ce20f8 supports RDMA memory domain 00:19:13.469 Initialization complete, running randread IO for 5 sec on 2 cores 00:19:13.469 ========================================================================== 00:19:13.469 Latency [us] 00:19:13.469 IOPS MiB/s Average min max 00:19:13.469 Core 2: 68452.76 267.39 232.83 97.39 2025.40 00:19:13.469 Core 3: 68990.89 269.50 231.00 84.96 3663.29 00:19:13.469 ========================================================================== 00:19:13.469 Total : 137443.65 536.89 231.91 84.96 3663.29 00:19:13.469 00:19:13.469 Total operations: 687315, translate 0 pull_push 0 memzero 687315 00:19:13.469 18:24:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:19:13.469 [2024-10-08 18:24:26.186634] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.376 Initializing NVMe Controllers 00:19:15.376 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:19:15.376 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:15.376 Initialization complete. Launching workers. 00:19:15.376 ======================================================== 00:19:15.376 Latency(us) 00:19:15.376 Device Information : IOPS MiB/s Average min max 00:19:15.376 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2014.89 7.87 7940.53 3987.55 10974.72 00:19:15.376 ======================================================== 00:19:15.376 Total : 2014.89 7.87 7940.53 3987.55 10974.72 00:19:15.376 00:19:15.376 18:24:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:19:15.376 18:24:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:19:15.376 18:24:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:19:15.376 18:24:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:19:15.376 [2024-10-08 18:24:28.534027] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
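The one-second spdk_nvme_perf pass traced above drives the same 4420 listener from the SPDK userspace initiator; reproduced outside the harness it is just the command captured in this run:

    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'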
00:19:15.376 [2024-10-08 18:24:28.534097] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454027 ] 00:19:15.639 [2024-10-08 18:24:28.622962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:15.639 [2024-10-08 18:24:28.703803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.639 [2024-10-08 18:24:28.703803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.275 bdev 8c7ba0de-26fb-4a8b-858e-baf9f8c3620b reports 1 memory domains 00:19:22.275 bdev 8c7ba0de-26fb-4a8b-858e-baf9f8c3620b supports RDMA memory domain 00:19:22.275 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:22.275 ========================================================================== 00:19:22.276 Latency [us] 00:19:22.276 IOPS MiB/s Average min max 00:19:22.276 Core 2: 18826.70 73.54 849.21 20.82 12389.11 00:19:22.276 Core 3: 19044.68 74.39 839.46 26.80 12361.75 00:19:22.276 ========================================================================== 00:19:22.276 Total : 37871.38 147.94 844.31 20.82 12389.11 00:19:22.276 00:19:22.276 Total operations: 189372, translate 189268 pull_push 0 memzero 104 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:22.276 rmmod nvme_rdma 00:19:22.276 rmmod nvme_fabrics 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@515 -- # '[' -n 3451372 ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # killprocess 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 3451372 ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3451372' 00:19:22.276 killing 
process with pid 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 3451372 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:22.276 00:19:22.276 real 0m33.835s 00:19:22.276 user 1m37.960s 00:19:22.276 sys 0m6.687s 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:22.276 ************************************ 00:19:22.276 END TEST dma 00:19:22.276 ************************************ 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.276 ************************************ 00:19:22.276 START TEST nvmf_identify 00:19:22.276 ************************************ 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:22.276 * Looking for test storage... 00:19:22.276 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.276 --rc genhtml_branch_coverage=1 00:19:22.276 --rc genhtml_function_coverage=1 00:19:22.276 --rc genhtml_legend=1 00:19:22.276 --rc geninfo_all_blocks=1 00:19:22.276 --rc geninfo_unexecuted_blocks=1 00:19:22.276 00:19:22.276 ' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.276 --rc genhtml_branch_coverage=1 00:19:22.276 --rc genhtml_function_coverage=1 00:19:22.276 --rc genhtml_legend=1 00:19:22.276 --rc geninfo_all_blocks=1 00:19:22.276 --rc geninfo_unexecuted_blocks=1 00:19:22.276 00:19:22.276 ' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.276 --rc genhtml_branch_coverage=1 00:19:22.276 --rc genhtml_function_coverage=1 00:19:22.276 --rc genhtml_legend=1 00:19:22.276 --rc geninfo_all_blocks=1 00:19:22.276 --rc geninfo_unexecuted_blocks=1 00:19:22.276 00:19:22.276 ' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.276 --rc genhtml_branch_coverage=1 00:19:22.276 --rc genhtml_function_coverage=1 00:19:22.276 --rc genhtml_legend=1 00:19:22.276 --rc geninfo_all_blocks=1 00:19:22.276 --rc geninfo_unexecuted_blocks=1 00:19:22.276 00:19:22.276 ' 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.276 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:22.277 18:24:34 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.277 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:22.277 18:24:34 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.277 18:24:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.851 18:24:41 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:28.851 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:28.851 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.851 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:28.852 Found net devices under 0000:18:00.0: mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:28.852 Found net devices under 0000:18:00.1: mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # rdma_device_init 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:28.852 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.852 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:19:28.852 altname enp24s0f0np0 00:19:28.852 altname ens785f0np0 00:19:28.852 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.852 valid_lft forever preferred_lft forever 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:28.852 18:24:41 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:28.852 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.852 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:19:28.852 altname enp24s0f1np1 00:19:28.852 altname ens785f1np1 00:19:28.852 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.852 valid_lft forever preferred_lft forever 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:28.852 18:24:41 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.852 192.168.100.9' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:28.852 192.168.100.9' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # head -n 1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:28.852 192.168.100.9' 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # tail -n +2 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # head -n 1 00:19:28.852 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3457755 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3457755 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3457755 ']' 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.853 18:24:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:28.853 [2024-10-08 18:24:41.933547] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:19:28.853 [2024-10-08 18:24:41.933605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.853 [2024-10-08 18:24:42.001348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.112 [2024-10-08 18:24:42.090473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.112 [2024-10-08 18:24:42.090524] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.112 [2024-10-08 18:24:42.090534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.112 [2024-10-08 18:24:42.090543] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.112 [2024-10-08 18:24:42.090550] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.112 [2024-10-08 18:24:42.094021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.112 [2024-10-08 18:24:42.094061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.112 [2024-10-08 18:24:42.094163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.112 [2024-10-08 18:24:42.094164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.680 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.680 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:19:29.680 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:29.680 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.680 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 [2024-10-08 18:24:42.853312] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5952e0/0x5997d0) succeed. 00:19:29.939 [2024-10-08 18:24:42.863867] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x596920/0x5dae70) succeed. 
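For reference, the target bring-up that rpc_cmd drives in this test can be reproduced by hand with SPDK's scripts/rpc.py. The lines below are a minimal sketch that mirrors the parameters recorded in this run (rdma transport with --num-shared-buffers 1024 -u 8192, Malloc0, nqn.2016-06.io.spdk:cnode1, listener on 192.168.100.8:4420); it assumes an nvmf_tgt is already running on the default /var/tmp/spdk.sock, and the subsystem/listener calls correspond to the rpc_cmd invocations that follow below in the log.

    # sketch: equivalent manual rpc.py calls (assumes nvmf_tgt is up on /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The two create_ib_device notices above correspond to the two mlx5 ports (mlx_0_0/mlx_0_1) detected earlier in the run.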
00:19:29.939 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:29.939 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.939 18:24:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 Malloc0 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 [2024-10-08 18:24:43.088806] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.939 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.939 [ 00:19:29.939 { 00:19:29.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:29.939 "subtype": "Discovery", 00:19:29.939 "listen_addresses": [ 00:19:29.939 { 00:19:29.939 "trtype": "RDMA", 
00:19:29.939 "adrfam": "IPv4", 00:19:29.939 "traddr": "192.168.100.8", 00:19:29.939 "trsvcid": "4420" 00:19:29.939 } 00:19:29.939 ], 00:19:29.939 "allow_any_host": true, 00:19:30.203 "hosts": [] 00:19:30.203 }, 00:19:30.203 { 00:19:30.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.203 "subtype": "NVMe", 00:19:30.203 "listen_addresses": [ 00:19:30.203 { 00:19:30.203 "trtype": "RDMA", 00:19:30.203 "adrfam": "IPv4", 00:19:30.203 "traddr": "192.168.100.8", 00:19:30.203 "trsvcid": "4420" 00:19:30.203 } 00:19:30.203 ], 00:19:30.203 "allow_any_host": true, 00:19:30.203 "hosts": [], 00:19:30.203 "serial_number": "SPDK00000000000001", 00:19:30.203 "model_number": "SPDK bdev Controller", 00:19:30.203 "max_namespaces": 32, 00:19:30.203 "min_cntlid": 1, 00:19:30.203 "max_cntlid": 65519, 00:19:30.203 "namespaces": [ 00:19:30.203 { 00:19:30.203 "nsid": 1, 00:19:30.203 "bdev_name": "Malloc0", 00:19:30.203 "name": "Malloc0", 00:19:30.203 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:30.203 "eui64": "ABCDEF0123456789", 00:19:30.203 "uuid": "4dc85491-75d8-43e4-bcac-fd973b919402" 00:19:30.203 } 00:19:30.203 ] 00:19:30.203 } 00:19:30.203 ] 00:19:30.203 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.203 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:30.203 [2024-10-08 18:24:43.148604] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:19:30.203 [2024-10-08 18:24:43.148647] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457953 ] 00:19:30.203 [2024-10-08 18:24:43.196069] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:30.203 [2024-10-08 18:24:43.196158] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:30.203 [2024-10-08 18:24:43.196182] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:30.203 [2024-10-08 18:24:43.196187] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:30.203 [2024-10-08 18:24:43.196222] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:30.203 [2024-10-08 18:24:43.204691] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:19:30.203 [2024-10-08 18:24:43.218300] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:30.203 [2024-10-08 18:24:43.218313] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:30.203 [2024-10-08 18:24:43.218321] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218329] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218335] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218342] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218348] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218354] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218361] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218367] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218376] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218382] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218389] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218395] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218401] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218407] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218414] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218420] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218426] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218433] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218439] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218445] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218452] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218458] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218464] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 
18:24:43.218470] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218477] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218483] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218489] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218496] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218502] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218508] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218514] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218520] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:30.203 [2024-10-08 18:24:43.218526] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:30.203 [2024-10-08 18:24:43.218531] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:30.203 [2024-10-08 18:24:43.218550] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.218565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x17fe00 00:19:30.203 [2024-10-08 18:24:43.224003] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.203 [2024-10-08 18:24:43.224014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.203 [2024-10-08 18:24:43.224024] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.224034] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.203 [2024-10-08 18:24:43.224045] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:30.203 [2024-10-08 18:24:43.224052] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:30.203 [2024-10-08 18:24:43.224069] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.224077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.203 [2024-10-08 18:24:43.224106] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.203 [2024-10-08 18:24:43.224112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:30.203 [2024-10-08 18:24:43.224119] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:30.203 [2024-10-08 18:24:43.224125] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.224132] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:30.203 [2024-10-08 18:24:43.224141] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.203 [2024-10-08 18:24:43.224148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.203 [2024-10-08 18:24:43.224164] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.203 [2024-10-08 18:24:43.224170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:30.203 [2024-10-08 18:24:43.224177] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:30.203 [2024-10-08 18:24:43.224183] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224191] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224199] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224225] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224238] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224244] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224252] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224282] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224294] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:30.204 [2024-10-08 18:24:43.224300] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224308] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224315] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224423] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:30.204 [2024-10-08 18:24:43.224429] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224440] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224467] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224479] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.204 [2024-10-08 18:24:43.224485] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224494] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224519] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224532] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.204 [2024-10-08 18:24:43.224538] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224544] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224551] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:30.204 [2024-10-08 18:24:43.224565] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224576] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224617] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224632] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:30.204 [2024-10-08 18:24:43.224641] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:30.204 [2024-10-08 18:24:43.224647] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:30.204 [2024-10-08 18:24:43.224656] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:30.204 [2024-10-08 18:24:43.224663] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:30.204 [2024-10-08 18:24:43.224668] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224675] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224682] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224690] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224724] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224740] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.204 [2024-10-08 18:24:43.224754] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.204 [2024-10-08 18:24:43.224768] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.204 [2024-10-08 18:24:43.224782] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.204 [2024-10-08 18:24:43.224795] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224801] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224813] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.204 [2024-10-08 18:24:43.224821] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.204 [2024-10-08 18:24:43.224847] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224860] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:30.204 [2024-10-08 18:24:43.224867] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:30.204 [2024-10-08 18:24:43.224873] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224885] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224923] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.224929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.224937] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224947] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:30.204 [2024-10-08 18:24:43.224973] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224990] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.224997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.204 [2024-10-08 18:24:43.225021] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.225027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.225040] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0ac0 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.225047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x17fe00 00:19:30.204 [2024-10-08 18:24:43.225054] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.225060] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.225065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.225072] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.225078] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.204 [2024-10-08 18:24:43.225084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:30.204 [2024-10-08 18:24:43.225094] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.204 [2024-10-08 18:24:43.225101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x17fe00 00:19:30.205 [2024-10-08 18:24:43.225108] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.205 [2024-10-08 18:24:43.225124] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.205 [2024-10-08 18:24:43.225129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:30.205 [2024-10-08 18:24:43.225141] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.205 ===================================================== 00:19:30.205 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:30.205 ===================================================== 00:19:30.205 Controller Capabilities/Features 00:19:30.205 ================================ 00:19:30.205 Vendor ID: 0000 00:19:30.205 Subsystem Vendor ID: 0000 00:19:30.205 Serial Number: .................... 00:19:30.205 Model Number: ........................................ 
00:19:30.205 Firmware Version: 25.01 00:19:30.205 Recommended Arb Burst: 0 00:19:30.205 IEEE OUI Identifier: 00 00 00 00:19:30.205 Multi-path I/O 00:19:30.205 May have multiple subsystem ports: No 00:19:30.205 May have multiple controllers: No 00:19:30.205 Associated with SR-IOV VF: No 00:19:30.205 Max Data Transfer Size: 131072 00:19:30.205 Max Number of Namespaces: 0 00:19:30.205 Max Number of I/O Queues: 1024 00:19:30.205 NVMe Specification Version (VS): 1.3 00:19:30.205 NVMe Specification Version (Identify): 1.3 00:19:30.205 Maximum Queue Entries: 128 00:19:30.205 Contiguous Queues Required: Yes 00:19:30.205 Arbitration Mechanisms Supported 00:19:30.205 Weighted Round Robin: Not Supported 00:19:30.205 Vendor Specific: Not Supported 00:19:30.205 Reset Timeout: 15000 ms 00:19:30.205 Doorbell Stride: 4 bytes 00:19:30.205 NVM Subsystem Reset: Not Supported 00:19:30.205 Command Sets Supported 00:19:30.205 NVM Command Set: Supported 00:19:30.205 Boot Partition: Not Supported 00:19:30.205 Memory Page Size Minimum: 4096 bytes 00:19:30.205 Memory Page Size Maximum: 4096 bytes 00:19:30.205 Persistent Memory Region: Not Supported 00:19:30.205 Optional Asynchronous Events Supported 00:19:30.205 Namespace Attribute Notices: Not Supported 00:19:30.205 Firmware Activation Notices: Not Supported 00:19:30.205 ANA Change Notices: Not Supported 00:19:30.205 PLE Aggregate Log Change Notices: Not Supported 00:19:30.205 LBA Status Info Alert Notices: Not Supported 00:19:30.205 EGE Aggregate Log Change Notices: Not Supported 00:19:30.205 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.205 Zone Descriptor Change Notices: Not Supported 00:19:30.205 Discovery Log Change Notices: Supported 00:19:30.205 Controller Attributes 00:19:30.205 128-bit Host Identifier: Not Supported 00:19:30.205 Non-Operational Permissive Mode: Not Supported 00:19:30.205 NVM Sets: Not Supported 00:19:30.205 Read Recovery Levels: Not Supported 00:19:30.205 Endurance Groups: Not Supported 00:19:30.205 Predictable Latency Mode: Not Supported 00:19:30.205 Traffic Based Keep ALive: Not Supported 00:19:30.205 Namespace Granularity: Not Supported 00:19:30.205 SQ Associations: Not Supported 00:19:30.205 UUID List: Not Supported 00:19:30.205 Multi-Domain Subsystem: Not Supported 00:19:30.205 Fixed Capacity Management: Not Supported 00:19:30.205 Variable Capacity Management: Not Supported 00:19:30.205 Delete Endurance Group: Not Supported 00:19:30.205 Delete NVM Set: Not Supported 00:19:30.205 Extended LBA Formats Supported: Not Supported 00:19:30.205 Flexible Data Placement Supported: Not Supported 00:19:30.205 00:19:30.205 Controller Memory Buffer Support 00:19:30.205 ================================ 00:19:30.205 Supported: No 00:19:30.205 00:19:30.205 Persistent Memory Region Support 00:19:30.205 ================================ 00:19:30.205 Supported: No 00:19:30.205 00:19:30.205 Admin Command Set Attributes 00:19:30.205 ============================ 00:19:30.205 Security Send/Receive: Not Supported 00:19:30.205 Format NVM: Not Supported 00:19:30.205 Firmware Activate/Download: Not Supported 00:19:30.205 Namespace Management: Not Supported 00:19:30.205 Device Self-Test: Not Supported 00:19:30.205 Directives: Not Supported 00:19:30.205 NVMe-MI: Not Supported 00:19:30.205 Virtualization Management: Not Supported 00:19:30.205 Doorbell Buffer Config: Not Supported 00:19:30.205 Get LBA Status Capability: Not Supported 00:19:30.205 Command & Feature Lockdown Capability: Not Supported 00:19:30.205 Abort Command Limit: 1 00:19:30.205 Async 
Event Request Limit: 4 00:19:30.205 Number of Firmware Slots: N/A 00:19:30.205 Firmware Slot 1 Read-Only: N/A 00:19:30.205 Firmware Activation Without Reset: N/A 00:19:30.205 Multiple Update Detection Support: N/A 00:19:30.205 Firmware Update Granularity: No Information Provided 00:19:30.205 Per-Namespace SMART Log: No 00:19:30.205 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.205 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:30.205 Command Effects Log Page: Not Supported 00:19:30.205 Get Log Page Extended Data: Supported 00:19:30.205 Telemetry Log Pages: Not Supported 00:19:30.205 Persistent Event Log Pages: Not Supported 00:19:30.205 Supported Log Pages Log Page: May Support 00:19:30.205 Commands Supported & Effects Log Page: Not Supported 00:19:30.205 Feature Identifiers & Effects Log Page:May Support 00:19:30.205 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.205 Data Area 4 for Telemetry Log: Not Supported 00:19:30.205 Error Log Page Entries Supported: 128 00:19:30.205 Keep Alive: Not Supported 00:19:30.205 00:19:30.205 NVM Command Set Attributes 00:19:30.205 ========================== 00:19:30.205 Submission Queue Entry Size 00:19:30.205 Max: 1 00:19:30.205 Min: 1 00:19:30.205 Completion Queue Entry Size 00:19:30.205 Max: 1 00:19:30.205 Min: 1 00:19:30.205 Number of Namespaces: 0 00:19:30.205 Compare Command: Not Supported 00:19:30.205 Write Uncorrectable Command: Not Supported 00:19:30.205 Dataset Management Command: Not Supported 00:19:30.205 Write Zeroes Command: Not Supported 00:19:30.205 Set Features Save Field: Not Supported 00:19:30.205 Reservations: Not Supported 00:19:30.205 Timestamp: Not Supported 00:19:30.205 Copy: Not Supported 00:19:30.205 Volatile Write Cache: Not Present 00:19:30.205 Atomic Write Unit (Normal): 1 00:19:30.205 Atomic Write Unit (PFail): 1 00:19:30.205 Atomic Compare & Write Unit: 1 00:19:30.205 Fused Compare & Write: Supported 00:19:30.205 Scatter-Gather List 00:19:30.205 SGL Command Set: Supported 00:19:30.205 SGL Keyed: Supported 00:19:30.205 SGL Bit Bucket Descriptor: Not Supported 00:19:30.205 SGL Metadata Pointer: Not Supported 00:19:30.205 Oversized SGL: Not Supported 00:19:30.205 SGL Metadata Address: Not Supported 00:19:30.205 SGL Offset: Supported 00:19:30.205 Transport SGL Data Block: Not Supported 00:19:30.205 Replay Protected Memory Block: Not Supported 00:19:30.205 00:19:30.205 Firmware Slot Information 00:19:30.205 ========================= 00:19:30.205 Active slot: 0 00:19:30.205 00:19:30.205 00:19:30.205 Error Log 00:19:30.205 ========= 00:19:30.205 00:19:30.205 Active Namespaces 00:19:30.205 ================= 00:19:30.205 Discovery Log Page 00:19:30.205 ================== 00:19:30.205 Generation Counter: 2 00:19:30.205 Number of Records: 2 00:19:30.205 Record Format: 0 00:19:30.205 00:19:30.205 Discovery Log Entry 0 00:19:30.205 ---------------------- 00:19:30.205 Transport Type: 1 (RDMA) 00:19:30.205 Address Family: 1 (IPv4) 00:19:30.205 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:30.205 Entry Flags: 00:19:30.205 Duplicate Returned Information: 1 00:19:30.205 Explicit Persistent Connection Support for Discovery: 1 00:19:30.205 Transport Requirements: 00:19:30.205 Secure Channel: Not Required 00:19:30.205 Port ID: 0 (0x0000) 00:19:30.205 Controller ID: 65535 (0xffff) 00:19:30.205 Admin Max SQ Size: 128 00:19:30.205 Transport Service Identifier: 4420 00:19:30.205 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:30.205 Transport Address: 192.168.100.8 00:19:30.205 
Transport Specific Address Subtype - RDMA 00:19:30.205 RDMA QP Service Type: 1 (Reliable Connected) 00:19:30.205 RDMA Provider Type: 1 (No provider specified) 00:19:30.205 RDMA CM Service: 1 (RDMA_CM) 00:19:30.205 Discovery Log Entry 1 00:19:30.205 ---------------------- 00:19:30.205 Transport Type: 1 (RDMA) 00:19:30.205 Address Family: 1 (IPv4) 00:19:30.205 Subsystem Type: 2 (NVM Subsystem) 00:19:30.205 Entry Flags: 00:19:30.205 Duplicate Returned Information: 0 00:19:30.205 Explicit Persistent Connection Support for Discovery: 0 00:19:30.205 Transport Requirements: 00:19:30.205 Secure Channel: Not Required 00:19:30.205 Port ID: 0 (0x0000) 00:19:30.205 Controller ID: 65535 (0xffff) 00:19:30.205 Admin Max SQ Size: [2024-10-08 18:24:43.225214] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:30.205 [2024-10-08 18:24:43.225225] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9012 doesn't match qid 00:19:30.205 [2024-10-08 18:24:43.225240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:5 sqhd:86b0 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225247] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9012 doesn't match qid 00:19:30.206 [2024-10-08 18:24:43.225255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:5 sqhd:86b0 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225262] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9012 doesn't match qid 00:19:30.206 [2024-10-08 18:24:43.225270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:5 sqhd:86b0 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225277] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 9012 doesn't match qid 00:19:30.206 [2024-10-08 18:24:43.225285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:5 sqhd:86b0 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225297] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225322] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225336] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225350] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225363] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225376] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:30.206 [2024-10-08 18:24:43.225382] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:30.206 [2024-10-08 18:24:43.225388] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225397] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225425] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225437] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225446] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225475] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225487] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225498] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225525] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225538] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225547] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225573] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225585] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225594] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 
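The long run of FABRIC PROPERTY GET completions around this point is the host polling the controller's CSTS register over the fabric after the preceding FABRIC PROPERTY SET wrote CC.SHN; the poll ends once CSTS.SHST reports shutdown complete (the cdw0:9 completion shortly before the "shutdown complete in 6 milliseconds" message further down). An application never writes this loop itself, it runs inside the driver when the controller is detached. A conceptual sketch of what each iteration amounts to, using only public SPDK accessors and not the driver's actual internal code:

```c
#include "spdk/nvme.h"

/* Conceptual sketch only: spdk_nvme_detach() performs the equivalent of this
 * internally. Each loop iteration corresponds to one FABRIC PROPERTY GET of
 * CSTS in the log; CC.SHN was written by the earlier FABRIC PROPERTY SET. */
static void wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts;

	do {
		csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	} while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}
```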
00:19:30.206 [2024-10-08 18:24:43.225602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225616] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225629] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225638] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225662] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225675] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225685] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225714] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225727] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225736] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225760] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225772] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225783] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225807] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225819] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225828] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225851] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225863] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225872] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225900] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225914] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225923] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225947] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.225953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.225959] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225968] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.225976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.225997] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.226009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.226015] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.226024] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.226032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.226052] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.226058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.226066] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.226075] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.226083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.206 [2024-10-08 18:24:43.226106] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.206 [2024-10-08 18:24:43.226112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:30.206 [2024-10-08 18:24:43.226118] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.206 [2024-10-08 18:24:43.226127] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226150] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226162] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226171] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226200] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226213] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226222] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226251] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226265] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226274] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226302] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226314] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226322] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226350] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226366] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226375] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226399] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226413] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226422] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226454] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226466] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226475] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226501] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 
18:24:43.226513] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226522] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226548] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226560] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226568] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226598] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226610] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226619] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226645] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226659] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226668] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226695] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226708] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226716] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226744] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226756] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226765] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226789] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226801] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226810] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226837] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226849] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226858] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.207 [2024-10-08 18:24:43.226882] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.207 [2024-10-08 18:24:43.226888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:30.207 [2024-10-08 18:24:43.226894] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.207 [2024-10-08 18:24:43.226903] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.226911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.226931] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.226938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.226944] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.226953] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.226961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.226985] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.226990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.226997] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227010] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227036] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227048] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227057] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227081] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227093] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227102] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227126] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227138] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227147] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227174] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 
18:24:43.227186] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227195] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227220] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227233] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227242] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227265] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227277] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227286] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227316] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227328] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227337] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227363] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227375] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227384] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227413] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227425] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227434] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227462] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227474] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227483] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227510] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227522] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227531] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227559] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227571] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227580] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227604] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227616] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227625] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227650] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227662] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227671] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227695] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227707] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227716] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227742] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227754] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227763] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.208 [2024-10-08 18:24:43.227796] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.208 [2024-10-08 18:24:43.227801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:30.208 [2024-10-08 18:24:43.227808] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.208 [2024-10-08 18:24:43.227817] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.209 [2024-10-08 18:24:43.227845] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.209 [2024-10-08 18:24:43.227850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:30.209 [2024-10-08 
18:24:43.227857] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227866] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.209 [2024-10-08 18:24:43.227889] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.209 [2024-10-08 18:24:43.227895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:30.209 [2024-10-08 18:24:43.227901] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227910] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.209 [2024-10-08 18:24:43.227936] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.209 [2024-10-08 18:24:43.227942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:30.209 [2024-10-08 18:24:43.227948] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227957] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.227965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.209 [2024-10-08 18:24:43.227983] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.209 [2024-10-08 18:24:43.227989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:30.209 [2024-10-08 18:24:43.227995] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.232012] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.232021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.209 [2024-10-08 18:24:43.232043] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.209 [2024-10-08 18:24:43.232049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:19:30.209 [2024-10-08 18:24:43.232055] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.209 [2024-10-08 18:24:43.232062] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:30.209 128 00:19:30.209 Transport Service Identifier: 4420 00:19:30.209 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:30.209 Transport Address: 192.168.100.8 00:19:30.209 Transport Specific Address 
Subtype - RDMA 00:19:30.209 RDMA QP Service Type: 1 (Reliable Connected) 00:19:30.209 RDMA Provider Type: 1 (No provider specified) 00:19:30.209 RDMA CM Service: 1 (RDMA_CM) 00:19:30.209 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:30.209 [2024-10-08 18:24:43.308734] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:19:30.209 [2024-10-08 18:24:43.308777] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457957 ] 00:19:30.209 [2024-10-08 18:24:43.356010] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:30.209 [2024-10-08 18:24:43.356095] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:30.209 [2024-10-08 18:24:43.356111] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:30.209 [2024-10-08 18:24:43.356116] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:30.209 [2024-10-08 18:24:43.356142] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:30.209 [2024-10-08 18:24:43.366407] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:19:30.472 [2024-10-08 18:24:43.376684] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:30.472 [2024-10-08 18:24:43.376696] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:30.472 [2024-10-08 18:24:43.376705] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376713] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376720] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376726] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376733] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376739] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376746] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376752] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.472 [2024-10-08 18:24:43.376758] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376765] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376771] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 
00:19:30.473 [2024-10-08 18:24:43.376777] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376784] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376795] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376802] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376808] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376815] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376821] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376827] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376834] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376840] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376846] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376853] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376859] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376865] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376872] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376878] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376885] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376891] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376897] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376904] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376909] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:30.473 [2024-10-08 18:24:43.376915] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:30.473 [2024-10-08 18:24:43.376920] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:30.473 [2024-10-08 18:24:43.376938] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.376951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 
key:0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382004] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382030] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382039] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.473 [2024-10-08 18:24:43.382046] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382053] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382067] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382094] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382107] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382114] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382121] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382129] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382154] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382167] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382173] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382181] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382188] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382215] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 
[2024-10-08 18:24:43.382220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382228] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382234] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382243] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382269] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382281] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:30.473 [2024-10-08 18:24:43.382288] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382294] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382301] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382408] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:30.473 [2024-10-08 18:24:43.382413] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382424] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382453] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382465] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.473 [2024-10-08 18:24:43.382471] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382480] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.473 [2024-10-08 18:24:43.382510] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 
18:24:43.382516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382522] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.473 [2024-10-08 18:24:43.382528] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:30.473 [2024-10-08 18:24:43.382534] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382541] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:30.473 [2024-10-08 18:24:43.382550] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.473 [2024-10-08 18:24:43.382560] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382614] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.473 [2024-10-08 18:24:43.382620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:30.473 [2024-10-08 18:24:43.382629] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:30.473 [2024-10-08 18:24:43.382638] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:30.473 [2024-10-08 18:24:43.382644] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:30.473 [2024-10-08 18:24:43.382650] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:30.473 [2024-10-08 18:24:43.382656] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:30.473 [2024-10-08 18:24:43.382662] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:30.473 [2024-10-08 18:24:43.382668] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382676] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.473 [2024-10-08 18:24:43.382684] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.473 [2024-10-08 18:24:43.382694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.382712] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.382718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:19:30.474 [2024-10-08 18:24:43.382727] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.474 [2024-10-08 18:24:43.382741] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.474 [2024-10-08 18:24:43.382755] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.474 [2024-10-08 18:24:43.382770] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.474 [2024-10-08 18:24:43.382783] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382789] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382799] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382807] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.382833] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.382839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.382846] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:30.474 [2024-10-08 18:24:43.382853] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382859] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382866] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382874] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382882] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382889] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.382914] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.382920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.382973] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.382980] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.382988] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383002] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383041] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383059] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:30.474 [2024-10-08 18:24:43.383069] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383076] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383084] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383093] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383135] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383156] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383162] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383170] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383179] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383217] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383232] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383238] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383246] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383257] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383264] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383274] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383281] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383288] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:30.474 [2024-10-08 18:24:43.383294] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:30.474 [2024-10-08 18:24:43.383300] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:30.474 [2024-10-08 18:24:43.383317] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.383333] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.474 [2024-10-08 18:24:43.383352] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383364] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383371] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383383] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383393] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.383421] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383433] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383442] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.383467] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383480] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383489] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.474 [2024-10-08 18:24:43.383516] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.474 [2024-10-08 18:24:43.383522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:19:30.474 [2024-10-08 18:24:43.383528] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383543] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383560] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x17fe00 00:19:30.474 [2024-10-08 18:24:43.383576] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x17fe00 
00:19:30.475 [2024-10-08 18:24:43.383584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383593] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383609] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.475 [2024-10-08 18:24:43.383615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:30.475 [2024-10-08 18:24:43.383626] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383633] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.475 [2024-10-08 18:24:43.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:19:30.475 [2024-10-08 18:24:43.383650] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383656] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.475 [2024-10-08 18:24:43.383662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:19:30.475 [2024-10-08 18:24:43.383669] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x17fe00
00:19:30.475 [2024-10-08 18:24:43.383675] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.475 [2024-10-08 18:24:43.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:30.475 [2024-10-08 18:24:43.383690] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00
00:19:30.475 =====================================================
00:19:30.475 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:30.475 =====================================================
00:19:30.475 Controller Capabilities/Features
00:19:30.475 ================================
00:19:30.475 Vendor ID: 8086
00:19:30.475 Subsystem Vendor ID: 8086
00:19:30.475 Serial Number: SPDK00000000000001
00:19:30.475 Model Number: SPDK bdev Controller
00:19:30.475 Firmware Version: 25.01
00:19:30.475 Recommended Arb Burst: 6
00:19:30.475 IEEE OUI Identifier: e4 d2 5c
00:19:30.475 Multi-path I/O
00:19:30.475 May have multiple subsystem ports: Yes
00:19:30.475 May have multiple controllers: Yes
00:19:30.475 Associated with SR-IOV VF: No
00:19:30.475 Max Data Transfer Size: 131072
00:19:30.475 Max Number of Namespaces: 32
00:19:30.475 Max Number of I/O Queues: 127
00:19:30.475 NVMe Specification Version (VS): 1.3
00:19:30.475 NVMe Specification Version (Identify): 1.3
00:19:30.475 Maximum Queue Entries: 128
00:19:30.475 Contiguous Queues Required: Yes
00:19:30.475 Arbitration Mechanisms Supported
00:19:30.475 Weighted Round Robin: Not Supported
00:19:30.475 Vendor Specific: Not Supported
00:19:30.475 Reset Timeout: 15000 ms
00:19:30.475 Doorbell Stride: 4 bytes
00:19:30.475 NVM Subsystem Reset: Not Supported
00:19:30.475 Command Sets Supported
00:19:30.475 NVM Command Set: Supported
00:19:30.475 Boot Partition: Not Supported
00:19:30.475 Memory Page Size Minimum: 4096 bytes
00:19:30.475 Memory Page Size Maximum: 4096 bytes
00:19:30.475 Persistent Memory Region: Not Supported
00:19:30.475 Optional Asynchronous Events Supported
00:19:30.475 Namespace Attribute Notices: Supported
00:19:30.475 Firmware Activation Notices: Not Supported
00:19:30.475 ANA Change Notices: Not Supported
00:19:30.475 PLE Aggregate Log Change Notices: Not Supported
00:19:30.475 LBA Status Info Alert Notices: Not Supported
00:19:30.475 EGE Aggregate Log Change Notices: Not Supported
00:19:30.475 Normal NVM Subsystem Shutdown event: Not Supported
00:19:30.475 Zone Descriptor Change Notices: Not Supported
00:19:30.475 Discovery Log Change Notices: Not Supported
00:19:30.475 Controller Attributes
00:19:30.475 128-bit Host Identifier: Supported
00:19:30.475 Non-Operational Permissive Mode: Not Supported
00:19:30.475 NVM Sets: Not Supported
00:19:30.475 Read Recovery Levels: Not Supported
00:19:30.475 Endurance Groups: Not Supported
00:19:30.475 Predictable Latency Mode: Not Supported
00:19:30.475 Traffic Based Keep ALive: Not Supported
00:19:30.475 Namespace Granularity: Not Supported
00:19:30.475 SQ Associations: Not Supported
00:19:30.475 UUID List: Not Supported
00:19:30.475 Multi-Domain Subsystem: Not Supported
00:19:30.475 Fixed Capacity Management: Not Supported
00:19:30.475 Variable Capacity Management: Not Supported
00:19:30.475 Delete Endurance Group: Not Supported
00:19:30.475 Delete NVM Set: Not Supported
00:19:30.475 Extended LBA Formats Supported: Not Supported
00:19:30.475 Flexible Data Placement Supported: Not Supported
00:19:30.475
00:19:30.475 Controller Memory Buffer Support
00:19:30.475 ================================
00:19:30.475 Supported: No
00:19:30.475
00:19:30.475 Persistent Memory Region Support
00:19:30.475 ================================
00:19:30.475 Supported: No
00:19:30.475
00:19:30.475 Admin Command Set Attributes
00:19:30.475 ============================
00:19:30.475 Security Send/Receive: Not Supported
00:19:30.475 Format NVM: Not Supported
00:19:30.475 Firmware Activate/Download: Not Supported
00:19:30.475 Namespace Management: Not Supported
00:19:30.475 Device Self-Test: Not Supported
00:19:30.475 Directives: Not Supported
00:19:30.475 NVMe-MI: Not Supported
00:19:30.475 Virtualization Management: Not Supported
00:19:30.475 Doorbell Buffer Config: Not Supported
00:19:30.475 Get LBA Status Capability: Not Supported
00:19:30.475 Command & Feature Lockdown Capability: Not Supported
00:19:30.475 Abort Command Limit: 4
00:19:30.475 Async Event Request Limit: 4
00:19:30.475 Number of Firmware Slots: N/A
00:19:30.475 Firmware Slot 1 Read-Only: N/A
00:19:30.475 Firmware Activation Without Reset: N/A
00:19:30.475 Multiple Update Detection Support: N/A
00:19:30.475 Firmware Update Granularity: No Information Provided
00:19:30.475 Per-Namespace SMART Log: No
00:19:30.475 Asymmetric Namespace Access Log Page: Not Supported
00:19:30.475 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:19:30.475 Command Effects Log Page: Supported
00:19:30.475 Get Log Page Extended Data: Supported
00:19:30.475 Telemetry Log Pages: Not Supported
00:19:30.475 Persistent Event Log Pages: Not Supported
00:19:30.475 Supported Log Pages Log Page: May Support
00:19:30.475 Commands Supported & Effects Log Page: Not Supported
00:19:30.475 Feature Identifiers & Effects Log Page:May Support
00:19:30.475 NVMe-MI Commands & Effects Log Page: May Support
00:19:30.475 Data Area 4 for Telemetry Log: Not Supported
00:19:30.475 Error Log Page Entries Supported: 128
00:19:30.475 Keep Alive: Supported
00:19:30.475 Keep Alive Granularity: 10000 ms
00:19:30.475
00:19:30.475 NVM Command Set Attributes
00:19:30.475 ==========================
00:19:30.475 Submission Queue Entry Size
00:19:30.475 Max: 64
00:19:30.475 Min: 64
00:19:30.475 Completion Queue Entry Size
00:19:30.475 Max: 16
00:19:30.475 Min: 16
00:19:30.475 Number of Namespaces: 32
00:19:30.475 Compare Command: Supported
00:19:30.475 Write Uncorrectable Command: Not Supported
00:19:30.475 Dataset Management Command: Supported
00:19:30.475 Write Zeroes Command: Supported
00:19:30.475 Set Features Save Field: Not Supported
00:19:30.475 Reservations: Supported
00:19:30.475 Timestamp: Not Supported
00:19:30.475 Copy: Supported
00:19:30.475 Volatile Write Cache: Present
00:19:30.475 Atomic Write Unit (Normal): 1
00:19:30.475 Atomic Write Unit (PFail): 1
00:19:30.475 Atomic Compare & Write Unit: 1
00:19:30.475 Fused Compare & Write: Supported
00:19:30.475 Scatter-Gather List
00:19:30.475 SGL Command Set: Supported
00:19:30.475 SGL Keyed: Supported
00:19:30.475 SGL Bit Bucket Descriptor: Not Supported
00:19:30.475 SGL Metadata Pointer: Not Supported
00:19:30.475 Oversized SGL: Not Supported
00:19:30.475 SGL Metadata Address: Not Supported
00:19:30.475 SGL Offset: Supported
00:19:30.475 Transport SGL Data Block: Not Supported
00:19:30.475 Replay Protected Memory Block: Not Supported
00:19:30.475
00:19:30.475 Firmware Slot Information
00:19:30.475 =========================
00:19:30.475 Active slot: 1
00:19:30.475 Slot 1 Firmware Revision: 25.01
00:19:30.475
00:19:30.475
00:19:30.475 Commands Supported and Effects
00:19:30.475 ==============================
00:19:30.475 Admin Commands
00:19:30.475 --------------
00:19:30.475 Get Log Page (02h): Supported
00:19:30.475 Identify (06h): Supported
00:19:30.475 Abort (08h): Supported
00:19:30.475 Set Features (09h): Supported
00:19:30.475 Get Features (0Ah): Supported
00:19:30.475 Asynchronous Event Request (0Ch): Supported
00:19:30.475 Keep Alive (18h): Supported
00:19:30.475 I/O Commands
00:19:30.475 ------------
00:19:30.475 Flush (00h): Supported LBA-Change
00:19:30.475 Write (01h): Supported LBA-Change
00:19:30.475 Read (02h): Supported
00:19:30.475 Compare (05h): Supported
00:19:30.475 Write Zeroes (08h): Supported LBA-Change
00:19:30.475 Dataset Management (09h): Supported LBA-Change
00:19:30.475 Copy (19h): Supported LBA-Change
00:19:30.475
00:19:30.475 Error Log
00:19:30.475 =========
00:19:30.475
00:19:30.475 Arbitration
00:19:30.475 ===========
00:19:30.475 Arbitration Burst: 1
00:19:30.475
00:19:30.475 Power Management
00:19:30.475 ================
00:19:30.475 Number of Power States: 1
00:19:30.475 Current Power State: Power State #0
00:19:30.475 Power State #0:
00:19:30.476 Max Power: 0.00 W
00:19:30.476 Non-Operational State: Operational
00:19:30.476 Entry Latency: Not Reported
00:19:30.476 Exit Latency: Not Reported
00:19:30.476 Relative Read Throughput: 0
00:19:30.476 Relative Read Latency: 0
00:19:30.476 Relative Write Throughput: 0
00:19:30.476 Relative Write Latency: 0
00:19:30.476 Idle Power: Not Reported
00:19:30.476 Active Power: Not Reported
00:19:30.476 Non-Operational Permissive Mode: Not Supported
00:19:30.476
00:19:30.476 Health Information
00:19:30.476 ==================
00:19:30.476 Critical Warnings:
00:19:30.476 Available Spare Space: OK
00:19:30.476 Temperature: OK
00:19:30.476 Device Reliability: OK
00:19:30.476 Read Only: No
00:19:30.476 Volatile Memory Backup: OK
00:19:30.476 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:30.476 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:19:30.476 Available Spare: 0%
00:19:30.476 Available Spare Threshold: 0%
00:19:30.476 Life Percentage [2024-10-08 18:24:43.383775] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x17fe00
00:19:30.476 [2024-10-08 18:24:43.383784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:30.476 [2024-10-08 18:24:43.383802] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.476 [2024-10-08 18:24:43.383808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383814] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00
00:19:30.476 [2024-10-08 18:24:43.383849] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:19:30.476 [2024-10-08 18:24:43.383859] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24259 doesn't match qid
00:19:30.476 [2024-10-08 18:24:43.383874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:5 sqhd:56b0 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383881] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24259 doesn't match qid
00:19:30.476 [2024-10-08 18:24:43.383889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:5 sqhd:56b0 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383896] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24259 doesn't match qid
00:19:30.476 [2024-10-08 18:24:43.383904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:5 sqhd:56b0 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383910] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24259 doesn't match qid
00:19:30.476 [2024-10-08 18:24:43.383918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:5 sqhd:56b0 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383927] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x17fe00
00:19:30.476 [2024-10-08 18:24:43.383935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:30.476 [2024-10-08 18:24:43.383956] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:30.476 [2024-10-08 18:24:43.383962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:19:30.476 [2024-10-08 18:24:43.383971] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00
00:19:30.476 [2024-10-08 18:24:43.383979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:30.476 [2024-10-08 18:24:43.383985]
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384004] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384016] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:30.476 [2024-10-08 18:24:43.384023] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:30.476 [2024-10-08 18:24:43.384029] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384038] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384072] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384085] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384094] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384121] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384135] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384144] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384168] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384181] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384190] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384212] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:19:30.476 [2024-10-08 18:24:43.384218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384225] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384234] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384257] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384269] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384279] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384307] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384320] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384329] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384357] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384370] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384379] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384405] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384419] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384428] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384436] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384456] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384468] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384477] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384499] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384511] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384520] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384546] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:30.476 [2024-10-08 18:24:43.384559] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384568] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.476 [2024-10-08 18:24:43.384575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.476 [2024-10-08 18:24:43.384597] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.476 [2024-10-08 18:24:43.384603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384610] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384619] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384643] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384655] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local 
addr 0x2000003cf718 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384664] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384686] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384700] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384709] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384737] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384749] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384758] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384782] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384796] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384805] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384837] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384850] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384859] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384881] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 
[2024-10-08 18:24:43.384886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384893] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384902] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384928] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384940] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384949] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.384957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.384976] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.384989] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385004] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385029] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385042] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385051] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385077] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385089] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385098] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385106] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385128] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385140] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385149] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385171] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385184] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385192] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385220] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385233] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385242] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385269] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385282] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385291] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385321] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:30.477 [2024-10-08 18:24:43.385333] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf948 length 0x10 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385342] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.477 [2024-10-08 18:24:43.385350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.477 [2024-10-08 18:24:43.385368] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.477 [2024-10-08 18:24:43.385373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385380] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385389] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385413] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385425] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385434] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385458] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385470] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385479] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385505] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385517] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385526] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385555] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 
[2024-10-08 18:24:43.385561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385568] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385577] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385601] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385613] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385622] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385650] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385662] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385671] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385702] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385715] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385724] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385747] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385760] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385769] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385776] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385794] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385807] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385816] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385841] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385853] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385862] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385890] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385902] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385911] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385937] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.385949] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385958] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.385966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.385986] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.390034] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf6a0 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.390046] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.390055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:30.478 [2024-10-08 18:24:43.390075] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:30.478 [2024-10-08 18:24:43.390081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0004 p:0 m:0 dnr:0 00:19:30.478 [2024-10-08 18:24:43.390088] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x17fe00 00:19:30.478 [2024-10-08 18:24:43.390095] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:19:30.478 Used: 0% 00:19:30.478 Data Units Read: 0 00:19:30.478 Data Units Written: 0 00:19:30.478 Host Read Commands: 0 00:19:30.478 Host Write Commands: 0 00:19:30.478 Controller Busy Time: 0 minutes 00:19:30.478 Power Cycles: 0 00:19:30.478 Power On Hours: 0 hours 00:19:30.478 Unsafe Shutdowns: 0 00:19:30.478 Unrecoverable Media Errors: 0 00:19:30.478 Lifetime Error Log Entries: 0 00:19:30.478 Warning Temperature Time: 0 minutes 00:19:30.478 Critical Temperature Time: 0 minutes 00:19:30.478 00:19:30.478 Number of Queues 00:19:30.478 ================ 00:19:30.478 Number of I/O Submission Queues: 127 00:19:30.478 Number of I/O Completion Queues: 127 00:19:30.478 00:19:30.478 Active Namespaces 00:19:30.478 ================= 00:19:30.478 Namespace ID:1 00:19:30.478 Error Recovery Timeout: Unlimited 00:19:30.478 Command Set Identifier: NVM (00h) 00:19:30.478 Deallocate: Supported 00:19:30.478 Deallocated/Unwritten Error: Not Supported 00:19:30.478 Deallocated Read Value: Unknown 00:19:30.478 Deallocate in Write Zeroes: Not Supported 00:19:30.478 Deallocated Guard Field: 0xFFFF 00:19:30.478 Flush: Supported 00:19:30.478 Reservation: Supported 00:19:30.478 Namespace Sharing Capabilities: Multiple Controllers 00:19:30.478 Size (in LBAs): 131072 (0GiB) 00:19:30.478 Capacity (in LBAs): 131072 (0GiB) 00:19:30.478 Utilization (in LBAs): 131072 (0GiB) 00:19:30.478 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:30.478 EUI64: ABCDEF0123456789 00:19:30.478 UUID: 4dc85491-75d8-43e4-bcac-fd973b919402 00:19:30.478 Thin Provisioning: Not Supported 00:19:30.478 Per-NS Atomic Units: Yes 00:19:30.478 Atomic Boundary Size (Normal): 0 00:19:30.478 Atomic Boundary Size (PFail): 0 00:19:30.478 Atomic Boundary Offset: 0 00:19:30.478 Maximum Single Source Range Length: 65535 00:19:30.478 Maximum Copy Length: 65535 00:19:30.478 Maximum Source Range Count: 1 00:19:30.478 NGUID/EUI64 Never Reused: No 00:19:30.478 Namespace Write Protected: No 00:19:30.478 Number of LBA Formats: 1 00:19:30.478 Current LBA Format: LBA Format #00 00:19:30.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:30.478 00:19:30.478 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:30.478 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.479 18:24:43 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:30.479 rmmod nvme_rdma 00:19:30.479 rmmod nvme_fabrics 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3457755 ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3457755 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3457755 ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3457755 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457755 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457755' 00:19:30.479 killing process with pid 3457755 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3457755 00:19:30.479 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3457755 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:30.738 00:19:30.738 real 0m9.131s 00:19:30.738 user 0m9.007s 00:19:30.738 sys 0m5.877s 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.738 ************************************ 00:19:30.738 END TEST nvmf_identify 00:19:30.738 ************************************ 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=rdma 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.738 18:24:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.999 ************************************ 00:19:30.999 START TEST nvmf_perf 00:19:30.999 ************************************ 00:19:30.999 18:24:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:30.999 * Looking for test storage... 00:19:30.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.999 --rc genhtml_branch_coverage=1 00:19:30.999 --rc genhtml_function_coverage=1 00:19:30.999 --rc genhtml_legend=1 00:19:30.999 --rc geninfo_all_blocks=1 00:19:30.999 --rc geninfo_unexecuted_blocks=1 00:19:30.999 00:19:30.999 ' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.999 --rc genhtml_branch_coverage=1 00:19:30.999 --rc genhtml_function_coverage=1 00:19:30.999 --rc genhtml_legend=1 00:19:30.999 --rc geninfo_all_blocks=1 00:19:30.999 --rc geninfo_unexecuted_blocks=1 00:19:30.999 00:19:30.999 ' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.999 --rc genhtml_branch_coverage=1 00:19:30.999 --rc genhtml_function_coverage=1 00:19:30.999 --rc genhtml_legend=1 00:19:30.999 --rc geninfo_all_blocks=1 00:19:30.999 --rc geninfo_unexecuted_blocks=1 00:19:30.999 00:19:30.999 ' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.999 --rc genhtml_branch_coverage=1 00:19:30.999 --rc genhtml_function_coverage=1 00:19:30.999 --rc genhtml_legend=1 00:19:30.999 --rc geninfo_all_blocks=1 00:19:30.999 --rc geninfo_unexecuted_blocks=1 00:19:30.999 00:19:30.999 ' 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.999 18:24:44 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.999 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.000 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.000 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.260 18:24:44 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.260 18:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.835 18:24:50 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:37.835 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:37.835 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:37.835 Found net devices under 0000:18:00.0: mlx_0_0 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:37.835 Found net devices under 0000:18:00.1: mlx_0_1 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # rdma_device_init 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:37.835 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:37.836 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.836 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:19:37.836 altname enp24s0f0np0 00:19:37.836 altname ens785f0np0 00:19:37.836 inet 192.168.100.8/24 scope global mlx_0_0 00:19:37.836 valid_lft forever preferred_lft forever 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:37.836 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.836 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:19:37.836 altname enp24s0f1np1 00:19:37.836 altname ens785f1np1 00:19:37.836 inet 192.168.100.9/24 scope global mlx_0_1 00:19:37.836 valid_lft forever preferred_lft forever 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- 
# '[' '' == iso ']' 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.836 18:24:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # 
RDMA_IP_LIST='192.168.100.8 00:19:38.096 192.168.100.9' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:38.096 192.168.100.9' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # head -n 1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:38.096 192.168.100.9' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # tail -n +2 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # head -n 1 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3460890 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3460890 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3460890 ']' 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.096 18:24:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:38.097 [2024-10-08 18:24:51.145727] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:19:38.097 [2024-10-08 18:24:51.145792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.097 [2024-10-08 18:24:51.233141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.356 [2024-10-08 18:24:51.323792] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.356 [2024-10-08 18:24:51.323840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.356 [2024-10-08 18:24:51.323849] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.356 [2024-10-08 18:24:51.323858] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.356 [2024-10-08 18:24:51.323865] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.356 [2024-10-08 18:24:51.325281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.356 [2024-10-08 18:24:51.325382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.356 [2024-10-08 18:24:51.325485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.356 [2024-10-08 18:24:51.325486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.924 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:38.925 18:24:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:42.216 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:42.216 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:42.216 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:19:42.216 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.475 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:42.475 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:19:42.475 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:42.475 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:19:42.475 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:19:42.734 [2024-10-08 18:24:55.726257] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:19:42.734 [2024-10-08 18:24:55.747769] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dc4c50/0x1c9a7e0) succeed. 00:19:42.734 [2024-10-08 18:24:55.758569] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dc60b0/0x1d1a4c0) succeed. 00:19:42.734 18:24:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.993 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:42.993 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.252 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:43.252 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:43.511 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:43.511 [2024-10-08 18:24:56.671804] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:43.771 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:43.771 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:19:43.771 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:19:43.771 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:43.771 18:24:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:19:45.149 Initializing NVMe Controllers 00:19:45.149 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:19:45.149 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:19:45.149 Initialization complete. Launching workers. 
00:19:45.149 ======================================================== 00:19:45.149 Latency(us) 00:19:45.149 Device Information : IOPS MiB/s Average min max 00:19:45.149 PCIE (0000:5f:00.0) NSID 1 from core 0: 96541.85 377.12 331.00 35.80 4718.59 00:19:45.149 ======================================================== 00:19:45.149 Total : 96541.85 377.12 331.00 35.80 4718.59 00:19:45.149 00:19:45.149 18:24:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:48.441 Initializing NVMe Controllers 00:19:48.441 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.441 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:48.441 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:48.441 Initialization complete. Launching workers. 00:19:48.441 ======================================================== 00:19:48.441 Latency(us) 00:19:48.441 Device Information : IOPS MiB/s Average min max 00:19:48.441 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6708.99 26.21 148.24 47.33 4088.99 00:19:48.441 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5228.99 20.43 191.02 68.08 4106.67 00:19:48.441 ======================================================== 00:19:48.441 Total : 11937.99 46.63 166.98 47.33 4106.67 00:19:48.441 00:19:48.441 18:25:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:52.638 Initializing NVMe Controllers 00:19:52.638 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.638 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:52.638 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:52.638 Initialization complete. Launching workers. 00:19:52.638 ======================================================== 00:19:52.638 Latency(us) 00:19:52.638 Device Information : IOPS MiB/s Average min max 00:19:52.638 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18054.92 70.53 1773.03 479.05 9308.75 00:19:52.638 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3996.66 15.61 8066.80 7744.49 15155.36 00:19:52.638 ======================================================== 00:19:52.638 Total : 22051.59 86.14 2913.73 479.05 15155.36 00:19:52.638 00:19:52.638 18:25:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:19:52.638 18:25:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:56.831 Initializing NVMe Controllers 00:19:56.831 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.831 Controller IO queue size 128, less than required. 00:19:56.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:19:56.831 Controller IO queue size 128, less than required. 00:19:56.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.831 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.831 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:56.832 Initialization complete. Launching workers. 00:19:56.832 ======================================================== 00:19:56.832 Latency(us) 00:19:56.832 Device Information : IOPS MiB/s Average min max 00:19:56.832 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3850.52 962.63 33301.94 15488.23 75545.87 00:19:56.832 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.83 992.21 32024.80 15460.37 65743.75 00:19:56.832 ======================================================== 00:19:56.832 Total : 7819.35 1954.84 32653.71 15460.37 75545.87 00:19:56.832 00:19:56.832 18:25:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:19:56.832 No valid NVMe controllers or AIO or URING devices found 00:19:56.832 Initializing NVMe Controllers 00:19:56.832 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.832 Controller IO queue size 128, less than required. 00:19:56.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.832 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:56.832 Controller IO queue size 128, less than required. 00:19:56.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.832 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:19:56.832 WARNING: Some requested NVMe devices were skipped 00:19:56.832 18:25:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:20:01.028 Initializing NVMe Controllers 00:20:01.028 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.028 Controller IO queue size 128, less than required. 00:20:01.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:01.028 Controller IO queue size 128, less than required. 00:20:01.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:01.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:01.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:01.028 Initialization complete. Launching workers. 
00:20:01.028 00:20:01.028 ==================== 00:20:01.028 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:01.028 RDMA transport: 00:20:01.028 dev name: mlx5_0 00:20:01.028 polls: 395200 00:20:01.028 idle_polls: 391987 00:20:01.028 completions: 42098 00:20:01.028 queued_requests: 1 00:20:01.028 total_send_wrs: 21049 00:20:01.028 send_doorbell_updates: 2981 00:20:01.028 total_recv_wrs: 21176 00:20:01.028 recv_doorbell_updates: 2982 00:20:01.028 --------------------------------- 00:20:01.028 00:20:01.028 ==================== 00:20:01.028 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:01.028 RDMA transport: 00:20:01.028 dev name: mlx5_0 00:20:01.028 polls: 395562 00:20:01.028 idle_polls: 395306 00:20:01.028 completions: 19506 00:20:01.028 queued_requests: 1 00:20:01.028 total_send_wrs: 9753 00:20:01.028 send_doorbell_updates: 251 00:20:01.028 total_recv_wrs: 9880 00:20:01.028 recv_doorbell_updates: 252 00:20:01.028 --------------------------------- 00:20:01.028 ======================================================== 00:20:01.028 Latency(us) 00:20:01.028 Device Information : IOPS MiB/s Average min max 00:20:01.028 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5262.00 1315.50 24386.11 11571.53 63197.38 00:20:01.028 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2438.00 609.50 52512.42 30940.77 79584.59 00:20:01.028 ======================================================== 00:20:01.028 Total : 7700.00 1925.00 33291.55 11571.53 79584.59 00:20:01.028 00:20:01.028 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:01.028 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.288 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:01.288 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:01.288 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:01.289 rmmod nvme_rdma 00:20:01.289 rmmod nvme_fabrics 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3460890 ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3460890 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3460890 ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@954 -- # kill -0 3460890 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3460890 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3460890' 00:20:01.289 killing process with pid 3460890 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3460890 00:20:01.289 18:25:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3460890 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:20:05.485 00:20:05.485 real 0m34.379s 00:20:05.485 user 1m50.094s 00:20:05.485 sys 0m6.954s 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:05.485 ************************************ 00:20:05.485 END TEST nvmf_perf 00:20:05.485 ************************************ 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.485 ************************************ 00:20:05.485 START TEST nvmf_fio_host 00:20:05.485 ************************************ 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:05.485 * Looking for test storage... 
00:20:05.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:05.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.485 --rc genhtml_branch_coverage=1 00:20:05.485 --rc genhtml_function_coverage=1 00:20:05.485 --rc genhtml_legend=1 00:20:05.485 --rc geninfo_all_blocks=1 00:20:05.485 --rc geninfo_unexecuted_blocks=1 00:20:05.485 00:20:05.485 ' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:05.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.485 --rc genhtml_branch_coverage=1 00:20:05.485 --rc genhtml_function_coverage=1 00:20:05.485 --rc genhtml_legend=1 00:20:05.485 --rc geninfo_all_blocks=1 00:20:05.485 --rc geninfo_unexecuted_blocks=1 00:20:05.485 00:20:05.485 ' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:05.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.485 --rc genhtml_branch_coverage=1 00:20:05.485 --rc genhtml_function_coverage=1 00:20:05.485 --rc genhtml_legend=1 00:20:05.485 --rc geninfo_all_blocks=1 00:20:05.485 --rc geninfo_unexecuted_blocks=1 00:20:05.485 00:20:05.485 ' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:05.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.485 --rc genhtml_branch_coverage=1 00:20:05.485 --rc genhtml_function_coverage=1 00:20:05.485 --rc genhtml_legend=1 00:20:05.485 --rc geninfo_all_blocks=1 00:20:05.485 --rc geninfo_unexecuted_blocks=1 00:20:05.485 00:20:05.485 ' 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.485 18:25:18 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.485 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.486 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:05.486 
18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.486 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.746 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:05.746 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:05.746 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.746 18:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:12.421 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:12.421 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:12.421 Found net devices under 0000:18:00.0: mlx_0_0 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:12.421 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:12.422 Found net devices under 0000:18:00.1: mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # rdma_device_init 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:12.422 
18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:12.422 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.422 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:20:12.422 altname enp24s0f0np0 00:20:12.422 altname ens785f0np0 00:20:12.422 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.422 valid_lft forever preferred_lft forever 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:12.422 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.422 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:20:12.422 altname enp24s0f1np1 00:20:12.422 altname ens785f1np1 00:20:12.422 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.422 valid_lft forever preferred_lft forever 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.422 18:25:25 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.422 192.168.100.9' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:12.422 192.168.100.9' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # head -n 1 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:12.422 192.168.100.9' 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # tail -n +2 00:20:12.422 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # head -n 1 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3467457 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3467457 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3467457 ']' 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.682 18:25:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.682 [2024-10-08 18:25:25.685032] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:20:12.682 [2024-10-08 18:25:25.685098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.682 [2024-10-08 18:25:25.770488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.942 [2024-10-08 18:25:25.862245] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.942 [2024-10-08 18:25:25.862285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.942 [2024-10-08 18:25:25.862295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.942 [2024-10-08 18:25:25.862304] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.942 [2024-10-08 18:25:25.862311] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.942 [2024-10-08 18:25:25.863773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.942 [2024-10-08 18:25:25.863812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.942 [2024-10-08 18:25:25.863917] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.942 [2024-10-08 18:25:25.863918] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.511 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.511 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:13.511 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:13.770 [2024-10-08 18:25:26.755380] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12862e0/0x128a7d0) succeed. 00:20:13.770 [2024-10-08 18:25:26.765948] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1287920/0x12cbe70) succeed. 
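With the RDMA transport created and both IB devices up, the rest of the target-side setup in this test is plain RPC calls. A condensed sketch follows; every command, NQN, serial number and address is taken from this trace, and rpc.py is assumed to talk to the default /var/tmp/spdk.sock socket:

  #!/usr/bin/env bash
  # Condensed sketch of the RPC-driven target setup performed by host/fio.sh in this run.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"    # uses /var/tmp/spdk.sock by default

  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc1    # 64 MB malloc bdev, 512-byte blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The nvmf_perf test earlier in this log drives the same sequence, except that it creates the transport with -c 0 and adds both the Malloc0 and Nvme0n1 namespaces to the subsystem.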
00:20:13.770 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:13.770 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.770 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.030 18:25:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:14.030 Malloc1 00:20:14.030 18:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.290 18:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:14.550 18:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:14.809 [2024-10-08 18:25:27.779120] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:14.809 18:25:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:15.069 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.070 18:25:28 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:15.070 18:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:15.329 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:15.329 fio-3.35 00:20:15.329 Starting 1 thread 00:20:17.862 00:20:17.862 test: (groupid=0, jobs=1): err= 0: pid=3467941: Tue Oct 8 18:25:30 2024 00:20:17.862 read: IOPS=17.5k, BW=68.4MiB/s (71.8MB/s)(137MiB/2004msec) 00:20:17.862 slat (nsec): min=1375, max=35366, avg=1531.02, stdev=469.88 00:20:17.862 clat (usec): min=2048, max=6718, avg=3624.77, stdev=124.22 00:20:17.862 lat (usec): min=2063, max=6720, avg=3626.30, stdev=124.15 00:20:17.862 clat percentiles (usec): 00:20:17.862 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3621], 00:20:17.862 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3621], 00:20:17.862 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3654], 95.00th=[ 3654], 00:20:17.862 | 99.00th=[ 3916], 99.50th=[ 4293], 99.90th=[ 5473], 99.95th=[ 5735], 00:20:17.862 | 99.99th=[ 6259] 00:20:17.862 bw ( KiB/s): min=68752, max=70888, per=100.00%, avg=70086.00, stdev=935.63, samples=4 00:20:17.862 iops : min=17188, max=17722, avg=17521.50, stdev=233.91, samples=4 00:20:17.862 write: IOPS=17.5k, BW=68.5MiB/s (71.8MB/s)(137MiB/2004msec); 0 zone resets 00:20:17.862 slat (nsec): min=1412, max=18122, avg=1883.79, stdev=517.30 00:20:17.862 clat (usec): min=2061, max=6705, avg=3624.90, stdev=137.47 00:20:17.862 lat (usec): min=2072, max=6707, avg=3626.78, stdev=137.41 00:20:17.862 clat percentiles (usec): 00:20:17.862 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:20:17.863 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3621], 00:20:17.863 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3654], 00:20:17.863 | 99.00th=[ 3949], 99.50th=[ 4490], 99.90th=[ 5604], 99.95th=[ 6194], 00:20:17.863 | 99.99th=[ 6718] 00:20:17.863 bw ( KiB/s): min=68848, max=70680, per=100.00%, avg=70144.00, stdev=876.43, samples=4 00:20:17.863 iops : min=17212, max=17670, avg=17536.00, stdev=219.11, samples=4 00:20:17.863 lat (msec) : 4=99.36%, 10=0.64% 00:20:17.863 cpu : usr=99.40%, sys=0.15%, ctx=16, majf=0, minf=3 00:20:17.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:17.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:17.863 issued rwts: total=35110,35132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:17.863 00:20:17.863 Run status group 0 (all jobs): 00:20:17.863 READ: bw=68.4MiB/s (71.8MB/s), 68.4MiB/s-68.4MiB/s (71.8MB/s-71.8MB/s), io=137MiB (144MB), run=2004-2004msec 00:20:17.863 WRITE: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=137MiB (144MB), run=2004-2004msec 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:17.863 18:25:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:17.863 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:17.863 fio-3.35 00:20:17.863 Starting 1 thread 00:20:20.399 00:20:20.399 test: (groupid=0, jobs=1): err= 0: pid=3468386: Tue Oct 8 18:25:33 2024 00:20:20.399 read: IOPS=14.3k, BW=223MiB/s (234MB/s)(439MiB/1969msec) 00:20:20.399 slat (nsec): min=2281, max=43827, avg=2646.61, stdev=1134.89 00:20:20.399 clat (usec): min=474, max=9551, avg=1628.03, stdev=1341.23 00:20:20.399 lat (usec): min=477, max=9555, avg=1630.68, stdev=1341.70 00:20:20.399 clat percentiles (usec): 00:20:20.399 | 1.00th=[ 685], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 922], 00:20:20.399 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1188], 60.00th=[ 1287], 00:20:20.399 | 70.00th=[ 1418], 80.00th=[ 1582], 90.00th=[ 4015], 95.00th=[ 5014], 00:20:20.399 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9110], 00:20:20.399 | 99.99th=[ 9503] 00:20:20.399 bw ( KiB/s): min=109920, max=112480, per=48.80%, avg=111464.00, stdev=1170.92, samples=4 00:20:20.399 iops : min= 6870, max= 7030, avg=6966.50, stdev=73.18, samples=4 00:20:20.399 write: IOPS=8076, BW=126MiB/s (132MB/s)(227MiB/1798msec); 0 zone resets 00:20:20.399 slat (usec): min=26, max=140, avg=30.21, stdev= 5.73 00:20:20.399 clat (usec): min=4083, max=19990, avg=12874.92, stdev=1817.20 00:20:20.399 lat (usec): min=4116, max=20019, avg=12905.13, stdev=1816.47 00:20:20.399 clat percentiles (usec): 00:20:20.399 | 1.00th=[ 7177], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:20:20.399 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12911], 60.00th=[13304], 00:20:20.399 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15008], 95.00th=[15664], 00:20:20.399 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19268], 99.95th=[19792], 00:20:20.399 | 99.99th=[19792] 00:20:20.399 bw ( KiB/s): min=113664, max=116608, per=89.31%, avg=115408.00, stdev=1281.20, samples=4 00:20:20.399 iops : min= 7104, max= 7288, avg=7213.00, stdev=80.07, samples=4 00:20:20.399 lat (usec) : 500=0.01%, 750=2.11%, 1000=18.29% 00:20:20.399 lat (msec) : 2=37.07%, 4=1.86%, 10=7.85%, 20=32.81% 00:20:20.399 cpu : usr=96.51%, sys=1.90%, ctx=185, majf=0, minf=2 00:20:20.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:20.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.399 issued rwts: total=28108,14522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.399 00:20:20.399 Run status group 0 (all jobs): 00:20:20.399 READ: bw=223MiB/s (234MB/s), 223MiB/s-223MiB/s (234MB/s-234MB/s), io=439MiB (461MB), run=1969-1969msec 00:20:20.399 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=227MiB (238MB), run=1798-1798msec 00:20:20.399 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:20.659 
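Both fio runs above go through the fio_plugin helper traced at autotest_common.sh@1337-1352: it inspects the spdk_nvme ioengine with ldd for a linked sanitizer runtime and builds LD_PRELOAD before handing the job file to /usr/src/fio/fio. A rough sketch of that wrapper, with paths taken from the trace and the control flow simplified:

    fio_plugin() {
        local plugin=$1; shift                      # .../build/fio/spdk_nvme
        local fio_dir=/usr/src/fio
        local sanitizers=(libasan libclang_rt.asan)
        local asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # A sanitizer runtime linked into the plugin must be preloaded first.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

Invocation then mirrors host/fio.sh@41, passing the job file plus an NVMe-oF filename string such as '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'.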
18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:20.659 rmmod nvme_rdma 00:20:20.659 rmmod nvme_fabrics 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3467457 ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3467457 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3467457 ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3467457 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467457 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467457' 00:20:20.659 killing process with pid 3467457 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3467457 00:20:20.659 18:25:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3467457 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:20:20.919 00:20:20.919 real 0m15.629s 00:20:20.919 user 0m44.969s 00:20:20.919 sys 0m6.501s 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.919 ************************************ 00:20:20.919 END TEST nvmf_fio_host 00:20:20.919 ************************************ 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=rdma 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:20.919 18:25:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.180 ************************************ 00:20:21.180 START TEST nvmf_failover 00:20:21.180 ************************************ 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:21.180 * Looking for test storage... 00:20:21.180 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.180 --rc genhtml_branch_coverage=1 00:20:21.180 --rc genhtml_function_coverage=1 00:20:21.180 --rc genhtml_legend=1 00:20:21.180 --rc geninfo_all_blocks=1 00:20:21.180 --rc geninfo_unexecuted_blocks=1 00:20:21.180 00:20:21.180 ' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.180 --rc genhtml_branch_coverage=1 00:20:21.180 --rc genhtml_function_coverage=1 00:20:21.180 --rc genhtml_legend=1 00:20:21.180 --rc geninfo_all_blocks=1 00:20:21.180 --rc geninfo_unexecuted_blocks=1 00:20:21.180 00:20:21.180 ' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.180 --rc genhtml_branch_coverage=1 00:20:21.180 --rc genhtml_function_coverage=1 00:20:21.180 --rc genhtml_legend=1 00:20:21.180 --rc geninfo_all_blocks=1 00:20:21.180 --rc geninfo_unexecuted_blocks=1 00:20:21.180 00:20:21.180 ' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.180 --rc genhtml_branch_coverage=1 00:20:21.180 --rc genhtml_function_coverage=1 00:20:21.180 --rc genhtml_legend=1 00:20:21.180 --rc geninfo_all_blocks=1 00:20:21.180 --rc geninfo_unexecuted_blocks=1 00:20:21.180 00:20:21.180 ' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.180 18:25:34 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:21.180 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.441 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.441 18:25:34 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.012 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:28.013 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:28.013 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:28.013 Found net devices under 0000:18:00.0: mlx_0_0 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:28.013 Found net devices under 0000:18:00.1: mlx_0_1 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # rdma_device_init 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:28.013 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:28.274 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:28.274 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:20:28.274 altname enp24s0f0np0 00:20:28.274 altname ens785f0np0 00:20:28.274 inet 192.168.100.8/24 scope global mlx_0_0 00:20:28.274 
valid_lft forever preferred_lft forever 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:28.274 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:28.274 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:20:28.274 altname enp24s0f1np1 00:20:28.274 altname ens785f1np1 00:20:28.274 inet 192.168.100.9/24 scope global mlx_0_1 00:20:28.274 valid_lft forever preferred_lft forever 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:28.274 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.275 18:25:41 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:28.275 192.168.100.9' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:28.275 192.168.100.9' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # head -n 1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:28.275 192.168.100.9' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # tail -n +2 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # head -n 1 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3471656 00:20:28.275 
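The failover host setup repeats the pattern from the fio test: rdma_device_init loads the kernel RDMA stack (common.sh@62-72), the mlx netdevs are walked for their IPv4 addresses, and nvme-rdma is loaded last (common.sh@500) before nvmf_tgt is started with -m 0xE. A sketch of that module-load sequence as it appears in the trace:

    # load_ib_rdma_modules, per nvmf/common.sh@66-72 (Linux only).
    load_ib_rdma_modules() {
        [[ $(uname) != Linux ]] && return 0
        modprobe ib_cm
        modprobe ib_core
        modprobe ib_umad
        modprobe ib_uverbs
        modprobe iw_cm
        modprobe rdma_cm
        modprobe rdma_ucm
    }
    load_ib_rdma_modules
    modprobe nvme-rdma   # host-side NVMe/RDMA transport, loaded later at common.sh@500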
18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3471656 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3471656 ']' 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.275 18:25:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:28.275 [2024-10-08 18:25:41.410555] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:20:28.275 [2024-10-08 18:25:41.410619] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.535 [2024-10-08 18:25:41.496207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:28.535 [2024-10-08 18:25:41.587299] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.535 [2024-10-08 18:25:41.587341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.535 [2024-10-08 18:25:41.587351] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.535 [2024-10-08 18:25:41.587360] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.535 [2024-10-08 18:25:41.587367] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
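waitforlisten then blocks until the freshly started target (pid 3471656 here) answers on /var/tmp/spdk.sock; the trace only shows its entry points (autotest_common.sh@831-840), so the loop below is a simplified sketch, with the retry count taken from max_retries=100 and the rpc.py probe assumed:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                 # target exited early
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                           # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }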
00:20:28.535 [2024-10-08 18:25:41.588113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.535 [2024-10-08 18:25:41.588215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.535 [2024-10-08 18:25:41.588216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.103 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.103 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:29.103 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:29.103 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.103 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:29.362 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.362 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:29.362 [2024-10-08 18:25:42.526444] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2066ab0/0x206afa0) succeed. 00:20:29.621 [2024-10-08 18:25:42.537188] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2068050/0x20ac640) succeed. 00:20:29.621 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:29.880 Malloc0 00:20:29.880 18:25:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.140 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.140 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:30.398 [2024-10-08 18:25:43.463594] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:30.398 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:30.657 [2024-10-08 18:25:43.672085] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:30.657 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:30.916 [2024-10-08 18:25:43.864776] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # 
bdevperf_pid=3472050 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3472050 /var/tmp/bdevperf.sock 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3472050 ']' 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.916 18:25:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:31.853 18:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.853 18:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:31.853 18:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:32.112 NVMe0n1 00:20:32.112 18:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:32.371 00:20:32.371 18:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3472234 00:20:32.371 18:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:32.371 18:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.309 18:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:33.569 18:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:36.859 18:25:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:36.859 00:20:36.859 18:25:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:37.118 18:25:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:40.409 18:25:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:40.409 [2024-10-08 18:25:53.237174] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:40.409 18:25:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:41.347 18:25:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:41.347 18:25:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3472234 00:20:47.925 { 00:20:47.925 "results": [ 00:20:47.925 { 00:20:47.925 "job": "NVMe0n1", 00:20:47.925 "core_mask": "0x1", 00:20:47.925 "workload": "verify", 00:20:47.925 "status": "finished", 00:20:47.925 "verify_range": { 00:20:47.925 "start": 0, 00:20:47.925 "length": 16384 00:20:47.925 }, 00:20:47.925 "queue_depth": 128, 00:20:47.925 "io_size": 4096, 00:20:47.925 "runtime": 15.005268, 00:20:47.925 "iops": 14259.058885186188, 00:20:47.925 "mibps": 55.69944877025855, 00:20:47.925 "io_failed": 3885, 00:20:47.925 "io_timeout": 0, 00:20:47.925 "avg_latency_us": 8797.45809667699, 00:20:47.925 "min_latency_us": 450.56, 00:20:47.925 "max_latency_us": 1021221.8434782609 00:20:47.925 } 00:20:47.925 ], 00:20:47.925 "core_count": 1 00:20:47.925 } 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3472050 ']' 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3472050' 00:20:47.925 killing process with pid 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3472050 00:20:47.925 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:47.925 [2024-10-08 18:25:43.929654] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
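Everything the trace above exercises before the I/O phase boils down to a short RPC sequence: one RDMA transport, a Malloc0 bdev exported through nqn.2016-06.io.spdk:cnode1, three RDMA listeners on 192.168.100.8 (ports 4420-4422), and a bdevperf instance attached to the subsystem over two of those paths with -x failover so the NVMe bdev can switch paths when a listener disappears. A condensed sketch reconstructed from the rpc.py calls captured above (sleeps, traps and error handling omitted):

# Sketch of the configuration traced above; not the literal failover.sh source.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
done

# bdevperf runs a 15 s queue-depth-128 4 KiB verify workload over the failover-capable bdev.
# (The harness waits for /var/tmp/bdevperf.sock here before issuing the attach RPCs.)
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &

The JSON summary above reports roughly 14.3k IOPS over the 15-second run with 3885 failed I/Os, which is the cost of the listener removals forcing the bdev to abort in-flight commands and switch paths mid-workload.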
00:20:47.925 [2024-10-08 18:25:43.929719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472050 ] 00:20:47.925 [2024-10-08 18:25:44.015111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.925 [2024-10-08 18:25:44.097704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.925 Running I/O for 15 seconds... 00:20:47.925 17895.00 IOPS, 69.90 MiB/s [2024-10-08T16:26:01.098Z] 9778.50 IOPS, 38.20 MiB/s [2024-10-08T16:26:01.098Z] [2024-10-08 18:25:47.524273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 
len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.925 [2024-10-08 18:25:47.524764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180800 00:20:47.925 [2024-10-08 18:25:47.524774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180800 00:20:47.926 [2024-10-08 18:25:47.524794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180800 00:20:47.926 [2024-10-08 18:25:47.524814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180800 00:20:47.926 [2024-10-08 18:25:47.524834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180800 00:20:47.926 [2024-10-08 18:25:47.524854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180800 00:20:47.926 [2024-10-08 18:25:47.524875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.524987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.524996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 
18:25:47.525263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.926 [2024-10-08 18:25:47.525447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.926 [2024-10-08 18:25:47.525457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:47.926 [2024-10-08 18:25:47.525466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25904 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25984 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.525979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.525988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.927 [2024-10-08 18:25:47.526196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.927 [2024-10-08 18:25:47.526207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 
18:25:47.526877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.526900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.928 [2024-10-08 18:25:47.526909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.528736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:47.928 [2024-10-08 18:25:47.528750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:47.928 [2024-10-08 18:25:47.528759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26400 len:8 PRP1 0x0 PRP2 0x0 00:20:47.928 [2024-10-08 18:25:47.528768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.928 [2024-10-08 18:25:47.528818] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller. 00:20:47.928 [2024-10-08 18:25:47.528830] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:20:47.929 [2024-10-08 18:25:47.528842] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:47.929 [2024-10-08 18:25:47.531626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.929 [2024-10-08 18:25:47.546157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:47.929 [2024-10-08 18:25:47.587095] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
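The long run of "ABORTED - SQ DELETION" completions above is the expected signature of the first forced failover: the 4420 listener is removed while the verify workload is in flight, every queued READ/WRITE on that qpair is completed manually with an abort status, the disconnected qpair is freed, and the final lines show bdev_nvme failing over from 192.168.100.8:4420 to 192.168.100.8:4421 and resetting the controller successfully. A small sketch of how one such switch can be forced and inspected by hand (subsystem NQN, address and ports are taken from this log; the 3-second pause mirrors the test's own sleep rather than being a hard requirement, and bdev_nvme_get_controllers is used only as the standard RPC for listing the attached controller and its paths):

# Sketch: drop the active listener, then check which path the bdev is using now.
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
"$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
sleep 3    # allow bdev_nvme's reconnect/reset handling to settle, as the test does
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers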
00:20:47.929 11589.00 IOPS, 45.27 MiB/s [2024-10-08T16:26:01.102Z] 13193.00 IOPS, 51.54 MiB/s [2024-10-08T16:26:01.102Z] 12587.60 IOPS, 49.17 MiB/s [2024-10-08T16:26:01.102Z] [2024-10-08 18:25:51.019833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.019882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.019905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.019915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.019927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.019936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.019948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.019957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.019969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.019978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.019989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120792 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000756e000 len:0x1000 key:0x180800 00:20:47.929 [2024-10-08 18:25:51.020275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.929 [2024-10-08 18:25:51.020407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.929 [2024-10-08 18:25:51.020417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.930 [2024-10-08 18:25:51.020594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 
dnr:0 00:20:47.930 [2024-10-08 18:25:51.020666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.020979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.020990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:32 nsid:1 lba:120968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x180800 00:20:47.930 [2024-10-08 18:25:51.021106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.930 [2024-10-08 18:25:51.021116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121040 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x180800 00:20:47.931 [2024-10-08 18:25:51.021736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.931 [2024-10-08 18:25:51.021838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.931 [2024-10-08 18:25:51.021848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.021858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.021880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.021901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.021921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.021942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.021962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.021982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.021994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 
dnr:0 00:20:47.932 [2024-10-08 18:25:51.022208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x180800 00:20:47.932 [2024-10-08 18:25:51.022402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.022422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.022441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.022461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.022472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.932 [2024-10-08 18:25:51.022481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.024225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:47.932 [2024-10-08 18:25:51.024238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:47.932 [2024-10-08 18:25:51.024246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121728 len:8 PRP1 0x0 PRP2 0x0 00:20:47.932 [2024-10-08 18:25:51.024256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.932 [2024-10-08 18:25:51.024307] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:20:47.932 [2024-10-08 18:25:51.024322] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:20:47.932 [2024-10-08 18:25:51.024334] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:47.932 [2024-10-08 18:25:51.027147] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.932 [2024-10-08 18:25:51.041541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:47.932 [2024-10-08 18:25:51.082732] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
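The same pattern repeats for the 192.168.100.8:4421 to 192.168.100.8:4422 hop, with throughput progress samples ("<n> IOPS, <n> MiB/s [timestamp]") interleaved between cycles. A second small sketch, under the same assumptions as above, that averages those samples from the same log:

# Minimal sketch, same assumptions: average the interleaved throughput
# progress samples ("<n> IOPS, <n> MiB/s [timestamp]") from the console log.
import re

SAMPLE = re.compile(r"(\d+\.\d+) IOPS, (\d+\.\d+) MiB/s")

def average_throughput(path="console.log"):
    iops, mibs = [], []
    with open(path, errors="replace") as log:
        for line in log:
            for i, m in SAMPLE.findall(line):
                iops.append(float(i))
                mibs.append(float(m))
    if iops:
        print(f"{len(iops)} samples: avg {sum(iops)/len(iops):.2f} IOPS, "
              f"avg {sum(mibs)/len(mibs):.2f} MiB/s")

if __name__ == "__main__":
    average_throughput()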
00:20:47.932 11582.17 IOPS, 45.24 MiB/s [2024-10-08T16:26:01.105Z] 12534.00 IOPS, 48.96 MiB/s [2024-10-08T16:26:01.105Z] 13249.38 IOPS, 51.76 MiB/s [2024-10-08T16:26:01.105Z] 13805.56 IOPS, 53.93 MiB/s [2024-10-08T16:26:01.105Z] 12451.40 IOPS, 48.64 MiB/s [2024-10-08T16:26:01.106Z] [2024-10-08 18:25:55.473251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.933 [2024-10-08 18:25:55.473286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.933 [2024-10-08 18:25:55.473315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.933 [2024-10-08 18:25:55.473336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.933 [2024-10-08 18:25:55.473356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 
m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180800 00:20:47.933 [2024-10-08 18:25:55.473965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.933 [2024-10-08 18:25:55.473976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.473985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.473996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474244] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 
[2024-10-08 18:25:55.474434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.934 [2024-10-08 18:25:55.474456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x180800 00:20:47.934 [2024-10-08 18:25:55.474576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.934 [2024-10-08 18:25:55.474587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.474596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.474986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.474995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 
18:25:55.475211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180800 00:20:47.935 [2024-10-08 18:25:55.475240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.935 [2024-10-08 18:25:55.475321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.935 [2024-10-08 18:25:55.475332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 
18:25:55.475595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.936 [2024-10-08 18:25:55.475723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x180800 00:20:47.936 [2024-10-08 18:25:55.475864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d038e000 sqhd:7250 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.477597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:47.936 [2024-10-08 18:25:55.477613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:47.936 [2024-10-08 18:25:55.477622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99824 len:8 PRP1 0x0 PRP2 0x0 00:20:47.936 [2024-10-08 18:25:55.477631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.936 [2024-10-08 18:25:55.477675] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:20:47.936 [2024-10-08 18:25:55.477687] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:20:47.936 [2024-10-08 18:25:55.477698] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:47.936 [2024-10-08 18:25:55.480492] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.936 [2024-10-08 18:25:55.494524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:47.936 [2024-10-08 18:25:55.535182] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:47.936 12837.91 IOPS, 50.15 MiB/s [2024-10-08T16:26:01.109Z] 13284.33 IOPS, 51.89 MiB/s [2024-10-08T16:26:01.109Z] 13662.23 IOPS, 53.37 MiB/s [2024-10-08T16:26:01.109Z] 13984.64 IOPS, 54.63 MiB/s [2024-10-08T16:26:01.109Z] 14259.87 IOPS, 55.70 MiB/s 00:20:47.936 Latency(us) 00:20:47.936 [2024-10-08T16:26:01.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.936 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:47.936 Verification LBA range: start 0x0 length 0x4000 00:20:47.936 NVMe0n1 : 15.01 14259.06 55.70 258.91 0.00 8797.46 450.56 1021221.84 00:20:47.936 [2024-10-08T16:26:01.109Z] =================================================================================================================== 00:20:47.936 [2024-10-08T16:26:01.110Z] Total : 14259.06 55.70 258.91 0.00 8797.46 450.56 1021221.84 00:20:47.937 Received shutdown signal, test time was about 15.000000 seconds 00:20:47.937 00:20:47.937 Latency(us) 00:20:47.937 [2024-10-08T16:26:01.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.937 [2024-10-08T16:26:01.110Z] =================================================================================================================== 00:20:47.937 [2024-10-08T16:26:01.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3474283 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3474283 /var/tmp/bdevperf.sock 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3474283 ']' 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
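The pass/fail gate for the timed run above is visible in the host/failover.sh@65-@67 records: the script counts 'Resetting controller successful' lines in the captured bdevperf output and requires all three planned failovers to have completed (count=3). The MiB/s column is simply IOPS times the 4 KiB I/O size, e.g. 14259.06 x 4096 / 2^20 is roughly 55.7 MiB/s, matching the table. A minimal sketch of that check, assuming the output was saved to the try.txt file the script cats further down in this log (path shortened here):

# sketch only: count successful controller resets and fail if the three
# expected failovers did not all complete
count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, found $count" >&2
    exit 1
fi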
00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.937 18:26:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:48.876 18:26:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.876 18:26:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:48.876 18:26:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:48.876 [2024-10-08 18:26:01.866482] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:48.876 18:26:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:49.135 [2024-10-08 18:26:02.067138] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:49.135 18:26:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:49.395 NVMe0n1 00:20:49.395 18:26:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:49.654 00:20:49.654 18:26:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:49.913 00:20:49.913 18:26:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.913 18:26:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:50.268 18:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.268 18:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:53.559 18:26:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:53.559 18:26:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:53.559 18:26:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3475024 00:20:53.559 18:26:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:53.559 18:26:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3475024 00:20:54.500 { 00:20:54.500 "results": [ 00:20:54.500 { 00:20:54.500 "job": "NVMe0n1", 
00:20:54.500 "core_mask": "0x1", 00:20:54.500 "workload": "verify", 00:20:54.500 "status": "finished", 00:20:54.500 "verify_range": { 00:20:54.500 "start": 0, 00:20:54.500 "length": 16384 00:20:54.500 }, 00:20:54.500 "queue_depth": 128, 00:20:54.500 "io_size": 4096, 00:20:54.500 "runtime": 1.005845, 00:20:54.500 "iops": 17947.09920514592, 00:20:54.500 "mibps": 70.10585627010126, 00:20:54.500 "io_failed": 0, 00:20:54.500 "io_timeout": 0, 00:20:54.500 "avg_latency_us": 7093.514380678041, 00:20:54.500 "min_latency_us": 218.15652173913043, 00:20:54.500 "max_latency_us": 14816.834782608696 00:20:54.500 } 00:20:54.500 ], 00:20:54.500 "core_count": 1 00:20:54.500 } 00:20:54.500 18:26:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:54.500 [2024-10-08 18:26:00.819846] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:20:54.500 [2024-10-08 18:26:00.819911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474283 ] 00:20:54.500 [2024-10-08 18:26:00.907068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.500 [2024-10-08 18:26:00.995946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.500 [2024-10-08 18:26:03.276223] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:20:54.500 [2024-10-08 18:26:03.276732] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.500 [2024-10-08 18:26:03.276767] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.500 [2024-10-08 18:26:03.293830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:54.500 [2024-10-08 18:26:03.310120] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:54.500 Running I/O for 1 seconds... 
00:20:54.500 17920.00 IOPS, 70.00 MiB/s 00:20:54.500 Latency(us) 00:20:54.500 [2024-10-08T16:26:07.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.500 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:54.500 Verification LBA range: start 0x0 length 0x4000 00:20:54.500 NVMe0n1 : 1.01 17947.10 70.11 0.00 0.00 7093.51 218.16 14816.83 00:20:54.500 [2024-10-08T16:26:07.673Z] =================================================================================================================== 00:20:54.500 [2024-10-08T16:26:07.673Z] Total : 17947.10 70.11 0.00 0.00 7093.51 218.16 14816.83 00:20:54.500 18:26:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.500 18:26:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:54.818 18:26:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.111 18:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:55.111 18:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:55.111 18:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.372 18:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:58.666 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:58.666 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3474283 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3474283 ']' 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3474283 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3474283 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3474283' 00:20:58.667 killing process with pid 3474283 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3474283 00:20:58.667 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3474283 00:20:58.927 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:20:58.927 18:26:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:59.187 rmmod nvme_rdma 00:20:59.187 rmmod nvme_fabrics 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3471656 ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3471656 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3471656 ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3471656 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471656 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471656' 00:20:59.187 killing process with pid 3471656 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3471656 00:20:59.187 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3471656 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:20:59.447 00:20:59.447 real 0m38.440s 00:20:59.447 user 2m7.144s 00:20:59.447 sys 0m7.965s 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
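The single-failover phase that just finished is driven entirely over rpc.py against a bdevperf started with -z (wait for RPC): two extra RDMA listeners are added to the subsystem, one NVMe bdev is attached through all three ports with -x failover, the active 4420 path is detached to force a failover, and teardown deletes the subsystem and unloads the nvme-rdma/nvme-fabrics modules. A condensed sketch of that sequence, using only commands visible in the trace (RPC stands in for the full scripts/rpc.py path; the per-port loop condenses the three separate attach calls shown above):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# expose the subsystem on two additional RDMA ports
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
# attach the same controller through 4420/4421/4422 with the failover multipath policy
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done
# detaching the active 4420 path forces bdevperf to fail over to the next listener
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# teardown, as in the trace: drop the subsystem and unload the RDMA transport modules
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics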
00:20:59.447 ************************************ 00:20:59.447 END TEST nvmf_failover 00:20:59.447 ************************************ 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.447 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.708 ************************************ 00:20:59.708 START TEST nvmf_host_discovery 00:20:59.708 ************************************ 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:59.708 * Looking for test storage... 00:20:59.708 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.708 --rc genhtml_branch_coverage=1 00:20:59.708 --rc genhtml_function_coverage=1 00:20:59.708 --rc genhtml_legend=1 00:20:59.708 --rc geninfo_all_blocks=1 00:20:59.708 --rc geninfo_unexecuted_blocks=1 00:20:59.708 00:20:59.708 ' 00:20:59.708 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:59.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.708 --rc genhtml_branch_coverage=1 00:20:59.709 --rc genhtml_function_coverage=1 00:20:59.709 --rc genhtml_legend=1 00:20:59.709 --rc geninfo_all_blocks=1 00:20:59.709 --rc geninfo_unexecuted_blocks=1 00:20:59.709 00:20:59.709 ' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.709 --rc genhtml_branch_coverage=1 00:20:59.709 --rc genhtml_function_coverage=1 00:20:59.709 --rc genhtml_legend=1 00:20:59.709 --rc geninfo_all_blocks=1 00:20:59.709 --rc geninfo_unexecuted_blocks=1 00:20:59.709 00:20:59.709 ' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:59.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.709 --rc genhtml_branch_coverage=1 00:20:59.709 --rc genhtml_function_coverage=1 00:20:59.709 --rc genhtml_legend=1 00:20:59.709 --rc geninfo_all_blocks=1 00:20:59.709 --rc geninfo_unexecuted_blocks=1 00:20:59.709 00:20:59.709 ' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:59.709 18:26:12 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.709 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.709 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:20:59.970 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:20:59.970 00:20:59.970 real 0m0.229s 00:20:59.970 user 0m0.127s 00:20:59.970 sys 0m0.120s 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.970 ************************************ 00:20:59.970 END TEST nvmf_host_discovery 00:20:59.970 ************************************ 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.970 ************************************ 00:20:59.970 START TEST nvmf_host_multipath_status 00:20:59.970 ************************************ 00:20:59.970 18:26:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:20:59.970 * Looking for test storage... 00:20:59.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:59.970 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:59.970 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:20:59.970 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:59.971 18:26:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.971 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.231 --rc genhtml_branch_coverage=1 00:21:00.231 --rc genhtml_function_coverage=1 00:21:00.231 --rc genhtml_legend=1 00:21:00.231 --rc geninfo_all_blocks=1 00:21:00.231 --rc geninfo_unexecuted_blocks=1 00:21:00.231 00:21:00.231 ' 00:21:00.231 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.231 --rc genhtml_branch_coverage=1 00:21:00.231 --rc genhtml_function_coverage=1 00:21:00.231 --rc genhtml_legend=1 00:21:00.231 --rc geninfo_all_blocks=1 00:21:00.232 --rc geninfo_unexecuted_blocks=1 00:21:00.232 00:21:00.232 ' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:00.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.232 --rc genhtml_branch_coverage=1 00:21:00.232 --rc genhtml_function_coverage=1 00:21:00.232 --rc genhtml_legend=1 00:21:00.232 --rc geninfo_all_blocks=1 00:21:00.232 --rc geninfo_unexecuted_blocks=1 00:21:00.232 00:21:00.232 ' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:00.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.232 --rc genhtml_branch_coverage=1 00:21:00.232 --rc genhtml_function_coverage=1 
00:21:00.232 --rc genhtml_legend=1 00:21:00.232 --rc geninfo_all_blocks=1 00:21:00.232 --rc geninfo_unexecuted_blocks=1 00:21:00.232 00:21:00.232 ' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:00.232 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.232 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.233 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.233 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:00.233 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:00.233 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.233 18:26:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.808 18:26:19 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.808 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:06.808 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:06.809 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:06.809 Found net devices under 0000:18:00.0: mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:06.809 Found net devices under 0000:18:00.1: mlx_0_1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # rdma_device_init 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:06.809 
18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:06.809 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:06.809 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:21:06.809 altname enp24s0f0np0 00:21:06.809 altname ens785f0np0 00:21:06.809 inet 192.168.100.8/24 scope global mlx_0_0 00:21:06.809 valid_lft forever preferred_lft forever 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
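The allocate_nic_ips trace around this point repeats one small address probe per RDMA interface (get_ip_address in nvmf/common.sh). A minimal sketch reconstructed from the traced commands, with the addresses this run reports; the function body here is an illustration of the trace, not the verbatim source:

# per-interface IPv4 lookup as traced: ip -o -4 | awk picks the addr/prefix, cut drops the prefix
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1   # -> 192.168.100.9 on this rig
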
00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:06.809 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:06.809 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:21:06.809 altname enp24s0f1np1 00:21:06.809 altname ens785f1np1 00:21:06.809 inet 192.168.100.9/24 scope global mlx_0_1 00:21:06.809 valid_lft forever preferred_lft forever 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.809 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:06.810 192.168.100.9' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:06.810 192.168.100.9' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # head -n 1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:06.810 192.168.100.9' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # tail -n +2 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # head -n 1 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@507 -- # nvmfpid=3478906 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3478906 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3478906 ']' 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.810 18:26:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:07.069 [2024-10-08 18:26:19.991928] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:21:07.069 [2024-10-08 18:26:19.991992] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.069 [2024-10-08 18:26:20.080462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:07.069 [2024-10-08 18:26:20.171156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.069 [2024-10-08 18:26:20.171202] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.069 [2024-10-08 18:26:20.171211] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.069 [2024-10-08 18:26:20.171236] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.069 [2024-10-08 18:26:20.171244] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
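The nvmfappstart sequence traced above reduces to launching the target binary with the recorded flags, remembering its pid, and waiting for the RPC socket; a minimal sketch, with the workspace path shortened to its repo-relative form and waitforlisten (from autotest_common.sh) assumed to poll /var/tmp/spdk.sock as the message above states:

# start the NVMe-oF target as traced: -i shm id, -e tracepoint mask, -m reactor core mask
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# block until the app is up and answering on the default RPC socket (/var/tmp/spdk.sock)
waitforlisten "$nvmfpid"
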
00:21:07.069 [2024-10-08 18:26:20.171899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.069 [2024-10-08 18:26:20.171899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3478906 00:21:08.007 18:26:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:08.007 [2024-10-08 18:26:21.090602] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2142c20/0x2147110) succeed. 00:21:08.007 [2024-10-08 18:26:21.100252] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2144120/0x21887b0) succeed. 00:21:08.267 18:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:08.267 Malloc0 00:21:08.267 18:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:08.526 18:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.785 18:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:09.045 [2024-10-08 18:26:21.998497] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:09.045 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:09.045 [2024-10-08 18:26:22.194915] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3479120 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3479120 /var/tmp/bdevperf.sock 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3479120 ']' 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.304 18:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.242 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:10.242 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:10.242 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:10.501 Nvme0n1 00:21:10.501 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:10.760 Nvme0n1 00:21:10.760 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:10.760 18:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:13.297 18:26:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:13.297 18:26:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:13.297 18:26:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:13.297 18:26:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:14.235 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:14.235 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:14.235 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.235 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:14.494 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.494 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:14.494 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.494 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:14.754 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.014 18:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:15.014 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.014 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:15.014 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.014 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:15.273 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.273 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
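Every port_status probe in this trace is the same RPC-plus-jq query against the bdevperf socket, parameterized by listener port and by which field (current, connected, accessible) is read; a condensed sketch of what the log shows, with the expected-value comparison the script then performs omitted and the two-argument signature used here purely for illustration:

# query the io_paths once and extract one field for one listener port
port_status() {
    local port=$1 field=$2   # e.g. 4420 current | connected | accessible
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field"
}
port_status 4420 current      # "true" in this run while 4420 is the path carrying I/O
port_status 4421 accessible   # "true" when the 4421 listener is ANA-accessible
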
00:21:15.273 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.273 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:15.532 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.532 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:15.532 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:15.792 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:15.792 18:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:17.172 18:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:17.172 18:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:17.172 18:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.172 18:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:17.172 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.172 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:17.172 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.172 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.432 18:26:30 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.432 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.691 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.691 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:17.691 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.691 18:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:17.950 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.950 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:17.950 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.950 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:18.209 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.209 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:18.209 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:18.468 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:18.727 18:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:19.665 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:19.665 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:19.665 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.665 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:19.924 18:26:32 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.924 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:19.924 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:19.924 18:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.924 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:19.924 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:19.924 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.924 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:20.183 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.183 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:20.183 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.183 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:20.441 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.441 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:20.441 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:20.441 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.700 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.700 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:20.700 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.700 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:20.959 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.959 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:21:20.959 18:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:20.959 18:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:21.218 18:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:22.161 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:22.161 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:22.161 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.161 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:22.421 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.421 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:22.421 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.421 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:22.680 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:22.680 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:22.680 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.680 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:22.939 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.939 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:22.939 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.939 18:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.199 18:26:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.199 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:23.458 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:23.458 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:23.458 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:23.718 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:23.977 18:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:24.915 18:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:24.915 18:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:24.915 18:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.915 18:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:25.174 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.174 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:25.174 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.174 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:25.433 18:26:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.433 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.693 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.693 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:25.693 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.693 18:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.952 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.952 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:25.952 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.952 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:26.214 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:26.214 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:26.214 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:26.474 18:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:26.474 18:26:39 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.852 18:26:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.110 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:28.370 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.370 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:28.370 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:28.370 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.630 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:21:28.630 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:28.630 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.630 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:28.890 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.890 18:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:29.150 18:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:29.150 18:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:29.150 18:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:29.410 18:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:21:30.893 18:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.152 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.152 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.153 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.153 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.412 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:31.672 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.672 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:31.672 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:31.931 18:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:32.190 18:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:33.129 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:33.129 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:33.129 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.129 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:33.388 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:33.388 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:33.388 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.388 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.648 18:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:33.908 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.908 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:33.908 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.908 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:34.167 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.167 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:34.167 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.167 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:34.426 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.426 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:34.426 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:34.685 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:34.945 18:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:35.884 18:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:35.884 18:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:35.884 18:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.884 18:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.143 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:36.403 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.403 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:36.403 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.403 18:26:49 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:36.662 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.662 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:36.662 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:36.662 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.921 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.921 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:36.921 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.921 18:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:37.180 18:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.180 18:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:37.180 18:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:37.180 18:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:37.439 18:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:38.378 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:38.378 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.637 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:38.897 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:38.897 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:38.897 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:38.897 18:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.156 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.156 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:39.156 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:39.156 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.416 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.416 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:39.416 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:39.416 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3479120 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3479120 ']' 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3479120 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.675 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3479120 00:21:39.936 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:39.936 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:39.936 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3479120' 00:21:39.936 killing process with pid 3479120 00:21:39.936 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3479120 00:21:39.936 18:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3479120 00:21:39.936 { 00:21:39.936 "results": [ 00:21:39.936 { 00:21:39.936 "job": "Nvme0n1", 00:21:39.936 "core_mask": "0x4", 00:21:39.936 "workload": "verify", 00:21:39.936 "status": "terminated", 00:21:39.936 "verify_range": { 00:21:39.936 "start": 0, 00:21:39.936 "length": 16384 00:21:39.936 }, 00:21:39.936 "queue_depth": 128, 00:21:39.936 "io_size": 4096, 00:21:39.936 "runtime": 28.830656, 00:21:39.936 "iops": 15867.693055614136, 00:21:39.936 "mibps": 61.98317599849272, 00:21:39.936 "io_failed": 0, 00:21:39.936 "io_timeout": 0, 00:21:39.936 "avg_latency_us": 8047.480034716005, 00:21:39.936 "min_latency_us": 54.53913043478261, 00:21:39.936 "max_latency_us": 3019898.88 00:21:39.936 } 00:21:39.936 ], 00:21:39.936 "core_count": 1 00:21:39.936 } 00:21:39.936 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3479120 00:21:39.936 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.936 [2024-10-08 18:26:22.272408] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:21:39.936 [2024-10-08 18:26:22.272475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479120 ] 00:21:39.936 [2024-10-08 18:26:22.358233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.936 [2024-10-08 18:26:22.438199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.936 Running I/O for 90 seconds... 
00:21:39.936 18304.00 IOPS, 71.50 MiB/s [2024-10-08T16:26:53.109Z] 18389.00 IOPS, 71.83 MiB/s [2024-10-08T16:26:53.109Z] 18370.67 IOPS, 71.76 MiB/s [2024-10-08T16:26:53.109Z] 18386.00 IOPS, 71.82 MiB/s [2024-10-08T16:26:53.109Z] 18380.80 IOPS, 71.80 MiB/s [2024-10-08T16:26:53.109Z] 18390.00 IOPS, 71.84 MiB/s [2024-10-08T16:26:53.109Z] 18377.71 IOPS, 71.79 MiB/s [2024-10-08T16:26:53.109Z] 18373.12 IOPS, 71.77 MiB/s [2024-10-08T16:26:53.109Z] 18389.33 IOPS, 71.83 MiB/s [2024-10-08T16:26:53.109Z] 18381.20 IOPS, 71.80 MiB/s [2024-10-08T16:26:53.109Z] 18385.45 IOPS, 71.82 MiB/s [2024-10-08T16:26:53.109Z] 18387.33 IOPS, 71.83 MiB/s [2024-10-08T16:26:53.109Z] [2024-10-08 18:26:36.738511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:36400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36408 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007538000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.936 [2024-10-08 18:26:36.738802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181700 00:21:39.936 [2024-10-08 18:26:36.738812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 
key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.738983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.738996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 
18:26:36.739163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:21:39.937 [2024-10-08 18:26:36.739951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181700 00:21:39.937 [2024-10-08 18:26:36.739981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.739993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.937 [2024-10-08 18:26:36.740008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.740025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.937 [2024-10-08 18:26:36.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.740047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.937 [2024-10-08 18:26:36.740056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:39.937 [2024-10-08 18:26:36.740068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:39.938 [2024-10-08 18:26:36.740372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.740661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x181700 00:21:39.938 [2024-10-08 18:26:36.741031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181700 00:21:39.938 [2024-10-08 18:26:36.741094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x181700 00:21:39.938 [2024-10-08 18:26:36.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181700 00:21:39.938 [2024-10-08 18:26:36.741147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741611] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 
18:26:36.741872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.741976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.741993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.742007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.742026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.742035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.938 [2024-10-08 18:26:36.742052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.938 [2024-10-08 18:26:36.742061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:39.938 17993.85 IOPS, 70.29 MiB/s [2024-10-08T16:26:53.111Z] 16708.57 IOPS, 65.27 MiB/s [2024-10-08T16:26:53.111Z] 15594.67 IOPS, 60.92 MiB/s [2024-10-08T16:26:53.111Z] 14932.50 IOPS, 58.33 MiB/s [2024-10-08T16:26:53.111Z] 15134.82 IOPS, 59.12 MiB/s [2024-10-08T16:26:53.111Z] 15324.67 IOPS, 59.86 MiB/s [2024-10-08T16:26:53.111Z] 15360.74 IOPS, 60.00 MiB/s [2024-10-08T16:26:53.111Z] 15345.25 IOPS, 59.94 MiB/s [2024-10-08T16:26:53.111Z] 15344.00 IOPS, 59.94 MiB/s [2024-10-08T16:26:53.112Z] 15491.14 IOPS, 60.51 MiB/s [2024-10-08T16:26:53.112Z] 15627.43 IOPS, 61.04 MiB/s [2024-10-08T16:26:53.112Z] 15723.92 IOPS, 61.42 MiB/s [2024-10-08T16:26:53.112Z] 15694.80 IOPS, 61.31 MiB/s [2024-10-08T16:26:53.112Z] 15665.12 IOPS, 61.19 MiB/s [2024-10-08T16:26:53.112Z] [2024-10-08 18:26:50.514883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.514929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.514965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.514976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.514988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:37016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.515932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.515987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.515997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.939 [2024-10-08 18:26:50.516701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x181700 00:21:39.939 [2024-10-08 18:26:50.516723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:39.939 [2024-10-08 18:26:50.516734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.940 [2024-10-08 18:26:50.516933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:39.940 [2024-10-08 18:26:50.516966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181700 00:21:39.940 [2024-10-08 18:26:50.516975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:39.940 15701.93 IOPS, 61.34 MiB/s [2024-10-08T16:26:53.113Z] 15796.54 IOPS, 61.71 MiB/s [2024-10-08T16:26:53.113Z] Received shutdown signal, test time was about 28.831289 seconds 00:21:39.940 00:21:39.940 Latency(us) 00:21:39.940 [2024-10-08T16:26:53.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.940 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.940 Verification LBA range: start 0x0 length 0x4000 00:21:39.940 Nvme0n1 : 28.83 15867.69 61.98 0.00 0.00 8047.48 54.54 3019898.88 00:21:39.940 [2024-10-08T16:26:53.113Z] =================================================================================================================== 00:21:39.940 [2024-10-08T16:26:53.113Z] Total : 15867.69 61.98 0.00 0.00 8047.48 54.54 3019898.88 00:21:39.940 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.199 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:40.200 rmmod nvme_rdma 00:21:40.200 rmmod nvme_fabrics 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3478906 ']' 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3478906 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3478906 ']' 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3478906 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.200 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3478906 00:21:40.459 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.459 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.459 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3478906' 00:21:40.459 killing process with pid 3478906 00:21:40.459 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3478906 00:21:40.459 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3478906 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:40.719 00:21:40.719 real 0m40.756s 00:21:40.719 user 1m56.725s 00:21:40.719 sys 0m9.463s 00:21:40.719 
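[annotation] The teardown traced above runs in a fixed order: the NVMe-oF subsystem is deleted over RPC, the EXIT trap is cleared, the per-test scratch file is removed, the host-side nvme-rdma and nvme-fabrics modules are unloaded, and only then is the target process killed and reaped. A minimal sketch of that sequence, using placeholder names ($rpc, $pid, $scratch_file) rather than the exact helpers in autotest_common.sh:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem first
trap - SIGINT SIGTERM EXIT                               # clear the cleanup trap
rm -f "$scratch_file"                                    # remove per-test scratch output
modprobe -v -r nvme-rdma                                 # unload the host RDMA transport
modprobe -v -r nvme-fabrics                              # then the fabrics core
kill "$pid" 2>/dev/null && wait "$pid" 2>/dev/null       # stop the nvmf target and reap it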
18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:40.719 ************************************ 00:21:40.719 END TEST nvmf_host_multipath_status 00:21:40.719 ************************************ 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.719 ************************************ 00:21:40.719 START TEST nvmf_discovery_remove_ifc 00:21:40.719 ************************************ 00:21:40.719 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:40.979 * Looking for test storage... 00:21:40.979 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:40.979 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:40.979 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:40.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.980 --rc genhtml_branch_coverage=1 00:21:40.980 --rc genhtml_function_coverage=1 00:21:40.980 --rc genhtml_legend=1 00:21:40.980 --rc geninfo_all_blocks=1 00:21:40.980 --rc geninfo_unexecuted_blocks=1 00:21:40.980 00:21:40.980 ' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:40.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.980 --rc genhtml_branch_coverage=1 00:21:40.980 --rc genhtml_function_coverage=1 00:21:40.980 --rc genhtml_legend=1 00:21:40.980 --rc geninfo_all_blocks=1 00:21:40.980 --rc geninfo_unexecuted_blocks=1 00:21:40.980 00:21:40.980 ' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:40.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.980 --rc genhtml_branch_coverage=1 00:21:40.980 --rc genhtml_function_coverage=1 00:21:40.980 --rc genhtml_legend=1 00:21:40.980 --rc geninfo_all_blocks=1 00:21:40.980 --rc geninfo_unexecuted_blocks=1 00:21:40.980 00:21:40.980 ' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:40.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.980 --rc genhtml_branch_coverage=1 00:21:40.980 --rc genhtml_function_coverage=1 00:21:40.980 --rc genhtml_legend=1 00:21:40.980 --rc geninfo_all_blocks=1 00:21:40.980 --rc geninfo_unexecuted_blocks=1 00:21:40.980 00:21:40.980 ' 00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
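[annotation] The lt/cmp_versions trace above checks whether the detected lcov is older than 2 by comparing the version strings component by component. A minimal standalone sketch of that comparison, with an illustrative function name rather than the actual helper in scripts/common.sh:
ver_lt() {
    local IFS=.-:                   # split on the same separators the trace uses
    local -a a=($1) b=($2)
    local i x y
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}  # missing components compare as 0
        (( x < y )) && return 0     # first smaller component: strictly older
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "1.15 is older than 2"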
00:21:40.980 18:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.980 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:40.981 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:21:40.981 00:21:40.981 real 0m0.236s 00:21:40.981 user 0m0.134s 00:21:40.981 sys 0m0.118s 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.981 ************************************ 00:21:40.981 END TEST nvmf_discovery_remove_ifc 00:21:40.981 ************************************ 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.981 ************************************ 00:21:40.981 START TEST nvmf_identify_kernel_target 00:21:40.981 ************************************ 00:21:40.981 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:41.241 * Looking for test storage... 00:21:41.241 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.241 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.242 --rc genhtml_branch_coverage=1 00:21:41.242 --rc genhtml_function_coverage=1 00:21:41.242 --rc genhtml_legend=1 00:21:41.242 --rc geninfo_all_blocks=1 00:21:41.242 --rc geninfo_unexecuted_blocks=1 00:21:41.242 00:21:41.242 ' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.242 --rc genhtml_branch_coverage=1 00:21:41.242 --rc genhtml_function_coverage=1 00:21:41.242 --rc genhtml_legend=1 00:21:41.242 --rc geninfo_all_blocks=1 00:21:41.242 --rc geninfo_unexecuted_blocks=1 00:21:41.242 00:21:41.242 ' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.242 --rc genhtml_branch_coverage=1 00:21:41.242 --rc genhtml_function_coverage=1 00:21:41.242 --rc genhtml_legend=1 00:21:41.242 --rc geninfo_all_blocks=1 00:21:41.242 --rc geninfo_unexecuted_blocks=1 00:21:41.242 00:21:41.242 ' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.242 --rc genhtml_branch_coverage=1 00:21:41.242 --rc genhtml_function_coverage=1 00:21:41.242 --rc genhtml_legend=1 00:21:41.242 --rc geninfo_all_blocks=1 00:21:41.242 --rc geninfo_unexecuted_blocks=1 00:21:41.242 00:21:41.242 ' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.242 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.242 18:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:47.818 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:47.818 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:47.818 Found net devices under 0000:18:00.0: mlx_0_0 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:47.818 Found net devices under 0000:18:00.1: mlx_0_1 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.818 18:27:00 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # rdma_device_init 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:47.818 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:48.079 18:27:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:48.079 
18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:48.079 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:48.080 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:48.080 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:21:48.080 altname enp24s0f0np0 00:21:48.080 altname ens785f0np0 00:21:48.080 inet 192.168.100.8/24 scope global mlx_0_0 00:21:48.080 valid_lft forever preferred_lft forever 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:48.080 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:48.080 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:21:48.080 altname enp24s0f1np1 00:21:48.080 altname ens785f1np1 00:21:48.080 inet 192.168.100.9/24 scope global mlx_0_1 00:21:48.080 valid_lft forever preferred_lft forever 00:21:48.080 18:27:01 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:48.080 
18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:48.080 192.168.100.9' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:48.080 192.168.100.9' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # head -n 1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:48.080 192.168.100.9' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # tail -n +2 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # head -n 1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:48.080 18:27:01 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:21:48.080 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:21:48.081 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:21:48.341 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:48.341 18:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:21:51.628 Waiting for block devices as requested 00:21:51.628 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:21:51.628 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:51.628 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:51.888 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:51.888 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:51.888 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:52.147 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:52.147 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:52.147 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:52.406 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:52.406 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:52.406 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:52.665 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:52.665 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:52.665 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:52.924 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:52.924 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:21:53.184 18:27:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:53.184 No valid GPT data, bailing 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo rdma 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:53.184 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:21:53.444 00:21:53.444 Discovery Log Number of Records 2, Generation counter 2 00:21:53.444 =====Discovery Log Entry 0====== 00:21:53.444 trtype: rdma 00:21:53.444 adrfam: ipv4 00:21:53.444 subtype: current discovery subsystem 00:21:53.444 treq: not specified, sq 
flow control disable supported 00:21:53.444 portid: 1 00:21:53.444 trsvcid: 4420 00:21:53.444 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:53.444 traddr: 192.168.100.8 00:21:53.444 eflags: none 00:21:53.444 rdma_prtype: not specified 00:21:53.444 rdma_qptype: connected 00:21:53.444 rdma_cms: rdma-cm 00:21:53.444 rdma_pkey: 0x0000 00:21:53.444 =====Discovery Log Entry 1====== 00:21:53.444 trtype: rdma 00:21:53.444 adrfam: ipv4 00:21:53.444 subtype: nvme subsystem 00:21:53.444 treq: not specified, sq flow control disable supported 00:21:53.444 portid: 1 00:21:53.444 trsvcid: 4420 00:21:53.444 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:53.444 traddr: 192.168.100.8 00:21:53.444 eflags: none 00:21:53.444 rdma_prtype: not specified 00:21:53.444 rdma_qptype: connected 00:21:53.444 rdma_cms: rdma-cm 00:21:53.444 rdma_pkey: 0x0000 00:21:53.444 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:21:53.444 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:53.444 ===================================================== 00:21:53.444 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:53.444 ===================================================== 00:21:53.444 Controller Capabilities/Features 00:21:53.444 ================================ 00:21:53.444 Vendor ID: 0000 00:21:53.444 Subsystem Vendor ID: 0000 00:21:53.444 Serial Number: 35aa644023d66c072305 00:21:53.444 Model Number: Linux 00:21:53.444 Firmware Version: 6.8.9-20 00:21:53.444 Recommended Arb Burst: 0 00:21:53.444 IEEE OUI Identifier: 00 00 00 00:21:53.444 Multi-path I/O 00:21:53.445 May have multiple subsystem ports: No 00:21:53.445 May have multiple controllers: No 00:21:53.445 Associated with SR-IOV VF: No 00:21:53.445 Max Data Transfer Size: Unlimited 00:21:53.445 Max Number of Namespaces: 0 00:21:53.445 Max Number of I/O Queues: 1024 00:21:53.445 NVMe Specification Version (VS): 1.3 00:21:53.445 NVMe Specification Version (Identify): 1.3 00:21:53.445 Maximum Queue Entries: 128 00:21:53.445 Contiguous Queues Required: No 00:21:53.445 Arbitration Mechanisms Supported 00:21:53.445 Weighted Round Robin: Not Supported 00:21:53.445 Vendor Specific: Not Supported 00:21:53.445 Reset Timeout: 7500 ms 00:21:53.445 Doorbell Stride: 4 bytes 00:21:53.445 NVM Subsystem Reset: Not Supported 00:21:53.445 Command Sets Supported 00:21:53.445 NVM Command Set: Supported 00:21:53.445 Boot Partition: Not Supported 00:21:53.445 Memory Page Size Minimum: 4096 bytes 00:21:53.445 Memory Page Size Maximum: 4096 bytes 00:21:53.445 Persistent Memory Region: Not Supported 00:21:53.445 Optional Asynchronous Events Supported 00:21:53.445 Namespace Attribute Notices: Not Supported 00:21:53.445 Firmware Activation Notices: Not Supported 00:21:53.445 ANA Change Notices: Not Supported 00:21:53.445 PLE Aggregate Log Change Notices: Not Supported 00:21:53.445 LBA Status Info Alert Notices: Not Supported 00:21:53.445 EGE Aggregate Log Change Notices: Not Supported 00:21:53.445 Normal NVM Subsystem Shutdown event: Not Supported 00:21:53.445 Zone Descriptor Change Notices: Not Supported 00:21:53.445 Discovery Log Change Notices: Supported 00:21:53.445 Controller Attributes 00:21:53.445 128-bit Host Identifier: Not Supported 00:21:53.445 Non-Operational Permissive Mode: Not Supported 00:21:53.445 NVM Sets: Not Supported 00:21:53.445 Read Recovery Levels: 
Not Supported 00:21:53.445 Endurance Groups: Not Supported 00:21:53.445 Predictable Latency Mode: Not Supported 00:21:53.445 Traffic Based Keep ALive: Not Supported 00:21:53.445 Namespace Granularity: Not Supported 00:21:53.445 SQ Associations: Not Supported 00:21:53.445 UUID List: Not Supported 00:21:53.445 Multi-Domain Subsystem: Not Supported 00:21:53.445 Fixed Capacity Management: Not Supported 00:21:53.445 Variable Capacity Management: Not Supported 00:21:53.445 Delete Endurance Group: Not Supported 00:21:53.445 Delete NVM Set: Not Supported 00:21:53.445 Extended LBA Formats Supported: Not Supported 00:21:53.445 Flexible Data Placement Supported: Not Supported 00:21:53.445 00:21:53.445 Controller Memory Buffer Support 00:21:53.445 ================================ 00:21:53.445 Supported: No 00:21:53.445 00:21:53.445 Persistent Memory Region Support 00:21:53.445 ================================ 00:21:53.445 Supported: No 00:21:53.445 00:21:53.445 Admin Command Set Attributes 00:21:53.445 ============================ 00:21:53.445 Security Send/Receive: Not Supported 00:21:53.445 Format NVM: Not Supported 00:21:53.445 Firmware Activate/Download: Not Supported 00:21:53.445 Namespace Management: Not Supported 00:21:53.445 Device Self-Test: Not Supported 00:21:53.445 Directives: Not Supported 00:21:53.445 NVMe-MI: Not Supported 00:21:53.445 Virtualization Management: Not Supported 00:21:53.445 Doorbell Buffer Config: Not Supported 00:21:53.445 Get LBA Status Capability: Not Supported 00:21:53.445 Command & Feature Lockdown Capability: Not Supported 00:21:53.445 Abort Command Limit: 1 00:21:53.445 Async Event Request Limit: 1 00:21:53.445 Number of Firmware Slots: N/A 00:21:53.445 Firmware Slot 1 Read-Only: N/A 00:21:53.445 Firmware Activation Without Reset: N/A 00:21:53.445 Multiple Update Detection Support: N/A 00:21:53.445 Firmware Update Granularity: No Information Provided 00:21:53.445 Per-Namespace SMART Log: No 00:21:53.445 Asymmetric Namespace Access Log Page: Not Supported 00:21:53.445 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:53.445 Command Effects Log Page: Not Supported 00:21:53.445 Get Log Page Extended Data: Supported 00:21:53.445 Telemetry Log Pages: Not Supported 00:21:53.445 Persistent Event Log Pages: Not Supported 00:21:53.445 Supported Log Pages Log Page: May Support 00:21:53.445 Commands Supported & Effects Log Page: Not Supported 00:21:53.445 Feature Identifiers & Effects Log Page:May Support 00:21:53.445 NVMe-MI Commands & Effects Log Page: May Support 00:21:53.445 Data Area 4 for Telemetry Log: Not Supported 00:21:53.445 Error Log Page Entries Supported: 1 00:21:53.445 Keep Alive: Not Supported 00:21:53.445 00:21:53.445 NVM Command Set Attributes 00:21:53.445 ========================== 00:21:53.445 Submission Queue Entry Size 00:21:53.445 Max: 1 00:21:53.445 Min: 1 00:21:53.445 Completion Queue Entry Size 00:21:53.445 Max: 1 00:21:53.445 Min: 1 00:21:53.445 Number of Namespaces: 0 00:21:53.445 Compare Command: Not Supported 00:21:53.445 Write Uncorrectable Command: Not Supported 00:21:53.445 Dataset Management Command: Not Supported 00:21:53.445 Write Zeroes Command: Not Supported 00:21:53.445 Set Features Save Field: Not Supported 00:21:53.445 Reservations: Not Supported 00:21:53.445 Timestamp: Not Supported 00:21:53.445 Copy: Not Supported 00:21:53.445 Volatile Write Cache: Not Present 00:21:53.445 Atomic Write Unit (Normal): 1 00:21:53.445 Atomic Write Unit (PFail): 1 00:21:53.445 Atomic Compare & Write Unit: 1 00:21:53.445 Fused Compare & Write: Not 
Supported 00:21:53.445 Scatter-Gather List 00:21:53.445 SGL Command Set: Supported 00:21:53.445 SGL Keyed: Supported 00:21:53.445 SGL Bit Bucket Descriptor: Not Supported 00:21:53.445 SGL Metadata Pointer: Not Supported 00:21:53.445 Oversized SGL: Not Supported 00:21:53.445 SGL Metadata Address: Not Supported 00:21:53.445 SGL Offset: Supported 00:21:53.445 Transport SGL Data Block: Not Supported 00:21:53.445 Replay Protected Memory Block: Not Supported 00:21:53.445 00:21:53.445 Firmware Slot Information 00:21:53.445 ========================= 00:21:53.445 Active slot: 0 00:21:53.445 00:21:53.445 00:21:53.445 Error Log 00:21:53.445 ========= 00:21:53.445 00:21:53.445 Active Namespaces 00:21:53.445 ================= 00:21:53.445 Discovery Log Page 00:21:53.445 ================== 00:21:53.445 Generation Counter: 2 00:21:53.445 Number of Records: 2 00:21:53.445 Record Format: 0 00:21:53.445 00:21:53.445 Discovery Log Entry 0 00:21:53.445 ---------------------- 00:21:53.445 Transport Type: 1 (RDMA) 00:21:53.445 Address Family: 1 (IPv4) 00:21:53.445 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:53.445 Entry Flags: 00:21:53.445 Duplicate Returned Information: 0 00:21:53.445 Explicit Persistent Connection Support for Discovery: 0 00:21:53.445 Transport Requirements: 00:21:53.445 Secure Channel: Not Specified 00:21:53.445 Port ID: 1 (0x0001) 00:21:53.445 Controller ID: 65535 (0xffff) 00:21:53.445 Admin Max SQ Size: 32 00:21:53.445 Transport Service Identifier: 4420 00:21:53.445 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:53.445 Transport Address: 192.168.100.8 00:21:53.445 Transport Specific Address Subtype - RDMA 00:21:53.445 RDMA QP Service Type: 1 (Reliable Connected) 00:21:53.445 RDMA Provider Type: 1 (No provider specified) 00:21:53.445 RDMA CM Service: 1 (RDMA_CM) 00:21:53.445 Discovery Log Entry 1 00:21:53.445 ---------------------- 00:21:53.445 Transport Type: 1 (RDMA) 00:21:53.445 Address Family: 1 (IPv4) 00:21:53.445 Subsystem Type: 2 (NVM Subsystem) 00:21:53.445 Entry Flags: 00:21:53.445 Duplicate Returned Information: 0 00:21:53.445 Explicit Persistent Connection Support for Discovery: 0 00:21:53.445 Transport Requirements: 00:21:53.445 Secure Channel: Not Specified 00:21:53.445 Port ID: 1 (0x0001) 00:21:53.445 Controller ID: 65535 (0xffff) 00:21:53.445 Admin Max SQ Size: 32 00:21:53.445 Transport Service Identifier: 4420 00:21:53.445 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:53.445 Transport Address: 192.168.100.8 00:21:53.445 Transport Specific Address Subtype - RDMA 00:21:53.445 RDMA QP Service Type: 1 (Reliable Connected) 00:21:53.706 RDMA Provider Type: 1 (No provider specified) 00:21:53.706 RDMA CM Service: 1 (RDMA_CM) 00:21:53.706 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:53.706 get_feature(0x01) failed 00:21:53.706 get_feature(0x02) failed 00:21:53.706 get_feature(0x04) failed 00:21:53.706 ===================================================== 00:21:53.706 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:21:53.706 ===================================================== 00:21:53.706 Controller Capabilities/Features 00:21:53.706 ================================ 00:21:53.706 Vendor ID: 0000 00:21:53.706 Subsystem Vendor ID: 0000 00:21:53.706 Serial Number: 
9f36469e6f780bdf91ad 00:21:53.706 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:53.706 Firmware Version: 6.8.9-20 00:21:53.706 Recommended Arb Burst: 6 00:21:53.706 IEEE OUI Identifier: 00 00 00 00:21:53.706 Multi-path I/O 00:21:53.706 May have multiple subsystem ports: Yes 00:21:53.706 May have multiple controllers: Yes 00:21:53.706 Associated with SR-IOV VF: No 00:21:53.706 Max Data Transfer Size: 1048576 00:21:53.706 Max Number of Namespaces: 1024 00:21:53.706 Max Number of I/O Queues: 128 00:21:53.706 NVMe Specification Version (VS): 1.3 00:21:53.706 NVMe Specification Version (Identify): 1.3 00:21:53.706 Maximum Queue Entries: 128 00:21:53.706 Contiguous Queues Required: No 00:21:53.706 Arbitration Mechanisms Supported 00:21:53.706 Weighted Round Robin: Not Supported 00:21:53.706 Vendor Specific: Not Supported 00:21:53.706 Reset Timeout: 7500 ms 00:21:53.706 Doorbell Stride: 4 bytes 00:21:53.706 NVM Subsystem Reset: Not Supported 00:21:53.706 Command Sets Supported 00:21:53.706 NVM Command Set: Supported 00:21:53.706 Boot Partition: Not Supported 00:21:53.706 Memory Page Size Minimum: 4096 bytes 00:21:53.706 Memory Page Size Maximum: 4096 bytes 00:21:53.706 Persistent Memory Region: Not Supported 00:21:53.706 Optional Asynchronous Events Supported 00:21:53.706 Namespace Attribute Notices: Supported 00:21:53.706 Firmware Activation Notices: Not Supported 00:21:53.706 ANA Change Notices: Supported 00:21:53.706 PLE Aggregate Log Change Notices: Not Supported 00:21:53.706 LBA Status Info Alert Notices: Not Supported 00:21:53.706 EGE Aggregate Log Change Notices: Not Supported 00:21:53.706 Normal NVM Subsystem Shutdown event: Not Supported 00:21:53.706 Zone Descriptor Change Notices: Not Supported 00:21:53.706 Discovery Log Change Notices: Not Supported 00:21:53.706 Controller Attributes 00:21:53.706 128-bit Host Identifier: Supported 00:21:53.706 Non-Operational Permissive Mode: Not Supported 00:21:53.706 NVM Sets: Not Supported 00:21:53.706 Read Recovery Levels: Not Supported 00:21:53.706 Endurance Groups: Not Supported 00:21:53.706 Predictable Latency Mode: Not Supported 00:21:53.706 Traffic Based Keep ALive: Supported 00:21:53.706 Namespace Granularity: Not Supported 00:21:53.706 SQ Associations: Not Supported 00:21:53.706 UUID List: Not Supported 00:21:53.706 Multi-Domain Subsystem: Not Supported 00:21:53.706 Fixed Capacity Management: Not Supported 00:21:53.706 Variable Capacity Management: Not Supported 00:21:53.706 Delete Endurance Group: Not Supported 00:21:53.706 Delete NVM Set: Not Supported 00:21:53.706 Extended LBA Formats Supported: Not Supported 00:21:53.706 Flexible Data Placement Supported: Not Supported 00:21:53.706 00:21:53.706 Controller Memory Buffer Support 00:21:53.706 ================================ 00:21:53.706 Supported: No 00:21:53.706 00:21:53.706 Persistent Memory Region Support 00:21:53.706 ================================ 00:21:53.706 Supported: No 00:21:53.706 00:21:53.706 Admin Command Set Attributes 00:21:53.706 ============================ 00:21:53.706 Security Send/Receive: Not Supported 00:21:53.706 Format NVM: Not Supported 00:21:53.706 Firmware Activate/Download: Not Supported 00:21:53.706 Namespace Management: Not Supported 00:21:53.706 Device Self-Test: Not Supported 00:21:53.706 Directives: Not Supported 00:21:53.706 NVMe-MI: Not Supported 00:21:53.706 Virtualization Management: Not Supported 00:21:53.706 Doorbell Buffer Config: Not Supported 00:21:53.706 Get LBA Status Capability: Not Supported 00:21:53.706 Command & Feature Lockdown 
Capability: Not Supported 00:21:53.706 Abort Command Limit: 4 00:21:53.706 Async Event Request Limit: 4 00:21:53.706 Number of Firmware Slots: N/A 00:21:53.706 Firmware Slot 1 Read-Only: N/A 00:21:53.706 Firmware Activation Without Reset: N/A 00:21:53.706 Multiple Update Detection Support: N/A 00:21:53.706 Firmware Update Granularity: No Information Provided 00:21:53.706 Per-Namespace SMART Log: Yes 00:21:53.706 Asymmetric Namespace Access Log Page: Supported 00:21:53.706 ANA Transition Time : 10 sec 00:21:53.706 00:21:53.706 Asymmetric Namespace Access Capabilities 00:21:53.706 ANA Optimized State : Supported 00:21:53.706 ANA Non-Optimized State : Supported 00:21:53.706 ANA Inaccessible State : Supported 00:21:53.706 ANA Persistent Loss State : Supported 00:21:53.706 ANA Change State : Supported 00:21:53.706 ANAGRPID is not changed : No 00:21:53.706 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:53.706 00:21:53.706 ANA Group Identifier Maximum : 128 00:21:53.706 Number of ANA Group Identifiers : 128 00:21:53.706 Max Number of Allowed Namespaces : 1024 00:21:53.706 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:53.706 Command Effects Log Page: Supported 00:21:53.706 Get Log Page Extended Data: Supported 00:21:53.706 Telemetry Log Pages: Not Supported 00:21:53.706 Persistent Event Log Pages: Not Supported 00:21:53.706 Supported Log Pages Log Page: May Support 00:21:53.706 Commands Supported & Effects Log Page: Not Supported 00:21:53.706 Feature Identifiers & Effects Log Page:May Support 00:21:53.706 NVMe-MI Commands & Effects Log Page: May Support 00:21:53.706 Data Area 4 for Telemetry Log: Not Supported 00:21:53.706 Error Log Page Entries Supported: 128 00:21:53.706 Keep Alive: Supported 00:21:53.706 Keep Alive Granularity: 1000 ms 00:21:53.706 00:21:53.706 NVM Command Set Attributes 00:21:53.706 ========================== 00:21:53.706 Submission Queue Entry Size 00:21:53.706 Max: 64 00:21:53.706 Min: 64 00:21:53.706 Completion Queue Entry Size 00:21:53.706 Max: 16 00:21:53.706 Min: 16 00:21:53.706 Number of Namespaces: 1024 00:21:53.706 Compare Command: Not Supported 00:21:53.706 Write Uncorrectable Command: Not Supported 00:21:53.706 Dataset Management Command: Supported 00:21:53.706 Write Zeroes Command: Supported 00:21:53.706 Set Features Save Field: Not Supported 00:21:53.706 Reservations: Not Supported 00:21:53.706 Timestamp: Not Supported 00:21:53.706 Copy: Not Supported 00:21:53.706 Volatile Write Cache: Present 00:21:53.706 Atomic Write Unit (Normal): 1 00:21:53.706 Atomic Write Unit (PFail): 1 00:21:53.706 Atomic Compare & Write Unit: 1 00:21:53.706 Fused Compare & Write: Not Supported 00:21:53.706 Scatter-Gather List 00:21:53.706 SGL Command Set: Supported 00:21:53.707 SGL Keyed: Supported 00:21:53.707 SGL Bit Bucket Descriptor: Not Supported 00:21:53.707 SGL Metadata Pointer: Not Supported 00:21:53.707 Oversized SGL: Not Supported 00:21:53.707 SGL Metadata Address: Not Supported 00:21:53.707 SGL Offset: Supported 00:21:53.707 Transport SGL Data Block: Not Supported 00:21:53.707 Replay Protected Memory Block: Not Supported 00:21:53.707 00:21:53.707 Firmware Slot Information 00:21:53.707 ========================= 00:21:53.707 Active slot: 0 00:21:53.707 00:21:53.707 Asymmetric Namespace Access 00:21:53.707 =========================== 00:21:53.707 Change Count : 0 00:21:53.707 Number of ANA Group Descriptors : 1 00:21:53.707 ANA Group Descriptor : 0 00:21:53.707 ANA Group ID : 1 00:21:53.707 Number of NSID Values : 1 00:21:53.707 Change Count : 0 00:21:53.707 ANA State 
: 1 00:21:53.707 Namespace Identifier : 1 00:21:53.707 00:21:53.707 Commands Supported and Effects 00:21:53.707 ============================== 00:21:53.707 Admin Commands 00:21:53.707 -------------- 00:21:53.707 Get Log Page (02h): Supported 00:21:53.707 Identify (06h): Supported 00:21:53.707 Abort (08h): Supported 00:21:53.707 Set Features (09h): Supported 00:21:53.707 Get Features (0Ah): Supported 00:21:53.707 Asynchronous Event Request (0Ch): Supported 00:21:53.707 Keep Alive (18h): Supported 00:21:53.707 I/O Commands 00:21:53.707 ------------ 00:21:53.707 Flush (00h): Supported 00:21:53.707 Write (01h): Supported LBA-Change 00:21:53.707 Read (02h): Supported 00:21:53.707 Write Zeroes (08h): Supported LBA-Change 00:21:53.707 Dataset Management (09h): Supported 00:21:53.707 00:21:53.707 Error Log 00:21:53.707 ========= 00:21:53.707 Entry: 0 00:21:53.707 Error Count: 0x3 00:21:53.707 Submission Queue Id: 0x0 00:21:53.707 Command Id: 0x5 00:21:53.707 Phase Bit: 0 00:21:53.707 Status Code: 0x2 00:21:53.707 Status Code Type: 0x0 00:21:53.707 Do Not Retry: 1 00:21:53.707 Error Location: 0x28 00:21:53.707 LBA: 0x0 00:21:53.707 Namespace: 0x0 00:21:53.707 Vendor Log Page: 0x0 00:21:53.707 ----------- 00:21:53.707 Entry: 1 00:21:53.707 Error Count: 0x2 00:21:53.707 Submission Queue Id: 0x0 00:21:53.707 Command Id: 0x5 00:21:53.707 Phase Bit: 0 00:21:53.707 Status Code: 0x2 00:21:53.707 Status Code Type: 0x0 00:21:53.707 Do Not Retry: 1 00:21:53.707 Error Location: 0x28 00:21:53.707 LBA: 0x0 00:21:53.707 Namespace: 0x0 00:21:53.707 Vendor Log Page: 0x0 00:21:53.707 ----------- 00:21:53.707 Entry: 2 00:21:53.707 Error Count: 0x1 00:21:53.707 Submission Queue Id: 0x0 00:21:53.707 Command Id: 0x0 00:21:53.707 Phase Bit: 0 00:21:53.707 Status Code: 0x2 00:21:53.707 Status Code Type: 0x0 00:21:53.707 Do Not Retry: 1 00:21:53.707 Error Location: 0x28 00:21:53.707 LBA: 0x0 00:21:53.707 Namespace: 0x0 00:21:53.707 Vendor Log Page: 0x0 00:21:53.707 00:21:53.707 Number of Queues 00:21:53.707 ================ 00:21:53.707 Number of I/O Submission Queues: 128 00:21:53.707 Number of I/O Completion Queues: 128 00:21:53.707 00:21:53.707 ZNS Specific Controller Data 00:21:53.707 ============================ 00:21:53.707 Zone Append Size Limit: 0 00:21:53.707 00:21:53.707 00:21:53.707 Active Namespaces 00:21:53.707 ================= 00:21:53.707 get_feature(0x05) failed 00:21:53.707 Namespace ID:1 00:21:53.707 Command Set Identifier: NVM (00h) 00:21:53.707 Deallocate: Supported 00:21:53.707 Deallocated/Unwritten Error: Not Supported 00:21:53.707 Deallocated Read Value: Unknown 00:21:53.707 Deallocate in Write Zeroes: Not Supported 00:21:53.707 Deallocated Guard Field: 0xFFFF 00:21:53.707 Flush: Supported 00:21:53.707 Reservation: Not Supported 00:21:53.707 Namespace Sharing Capabilities: Multiple Controllers 00:21:53.707 Size (in LBAs): 7814037168 (3726GiB) 00:21:53.707 Capacity (in LBAs): 7814037168 (3726GiB) 00:21:53.707 Utilization (in LBAs): 7814037168 (3726GiB) 00:21:53.707 UUID: a5e8b1a2-ffa9-487e-91ba-dc18eb5ba35f 00:21:53.707 Thin Provisioning: Not Supported 00:21:53.707 Per-NS Atomic Units: Yes 00:21:53.707 Atomic Boundary Size (Normal): 0 00:21:53.707 Atomic Boundary Size (PFail): 0 00:21:53.707 Atomic Boundary Offset: 0 00:21:53.707 NGUID/EUI64 Never Reused: No 00:21:53.707 ANA group ID: 1 00:21:53.707 Namespace Write Protected: No 00:21:53.707 Number of LBA Formats: 1 00:21:53.707 Current LBA Format: LBA Format #00 00:21:53.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:53.707 00:21:53.707 
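The configure_kernel_target steps traced above boil down to a short configfs recipe for exporting a local block device as a kernel NVMe-oF target over RDMA. A rough sketch follows; the configfs attribute file names are the standard nvmet ones and are inferred here, since the xtrace output only records the values being echoed, and the teardown mirrors the clean_kernel_target trap that runs a little further down in this log.

# Export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn on 192.168.100.8:4420 over RDMA.
# Attribute file names below are assumed from the standard nvmet configfs layout.
modprobe nvmet
modprobe nvmet-rdma
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t rdma -a 192.168.100.8 -s 4420   # should report both discovery log entries
# Teardown, matching the clean_kernel_target steps traced below:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_rdma nvmet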
18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:53.707 rmmod nvme_rdma 00:21:53.707 rmmod nvme_fabrics 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:21:53.707 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:21:53.967 18:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:57.261 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:57.261 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:00.552 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:22:00.552 00:22:00.552 real 0m19.470s 00:22:00.552 user 0m4.966s 00:22:00.552 sys 0m10.718s 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.552 ************************************ 00:22:00.552 END TEST nvmf_identify_kernel_target 00:22:00.552 ************************************ 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.552 ************************************ 00:22:00.552 START TEST nvmf_auth_host 00:22:00.552 ************************************ 00:22:00.552 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:00.813 * Looking for test storage... 
00:22:00.813 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.813 --rc genhtml_branch_coverage=1 00:22:00.813 --rc genhtml_function_coverage=1 00:22:00.813 --rc genhtml_legend=1 00:22:00.813 --rc geninfo_all_blocks=1 00:22:00.813 --rc geninfo_unexecuted_blocks=1 00:22:00.813 00:22:00.813 ' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.813 --rc genhtml_branch_coverage=1 00:22:00.813 --rc genhtml_function_coverage=1 00:22:00.813 --rc genhtml_legend=1 00:22:00.813 --rc geninfo_all_blocks=1 00:22:00.813 --rc geninfo_unexecuted_blocks=1 00:22:00.813 00:22:00.813 ' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.813 --rc genhtml_branch_coverage=1 00:22:00.813 --rc genhtml_function_coverage=1 00:22:00.813 --rc genhtml_legend=1 00:22:00.813 --rc geninfo_all_blocks=1 00:22:00.813 --rc geninfo_unexecuted_blocks=1 00:22:00.813 00:22:00.813 ' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.813 --rc genhtml_branch_coverage=1 00:22:00.813 --rc genhtml_function_coverage=1 00:22:00.813 --rc genhtml_legend=1 00:22:00.813 --rc geninfo_all_blocks=1 00:22:00.813 --rc geninfo_unexecuted_blocks=1 00:22:00.813 00:22:00.813 ' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.813 18:27:13 
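The lt/cmp_versions check traced above (deciding whether the installed lcov, 1.15 here, predates 2.x) is an ordinary element-wise numeric version comparison; a compact sketch of the same idea, assuming purely numeric version components:

# Return success when version $1 is strictly lower than version $2.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${ver1[i]:-0} b=${ver2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov option set"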
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.813 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.814 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.814 18:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:07.442 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:07.442 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:07.442 18:27:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:07.442 Found net devices under 0000:18:00.0: mlx_0_0 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:07.442 Found net devices under 0000:18:00.1: mlx_0_1 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # rdma_device_init 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:07.442 18:27:20 
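The device enumeration traced above pairs each matching Mellanox PCI function with the net devices the kernel created for it simply by globbing the per-device sysfs net/ directory; a minimal equivalent (the PCI addresses are the ones seen on this test bed):

# List the kernel net devices backed by a given PCI function.
for pci in 0000:18:00.0 0000:18:00.1; do
  for netdev in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $netdev ]] || continue    # glob may not match if the function has no netdev
    echo "Found net device under $pci: ${netdev##*/}"
  done
done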
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:07.442 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
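rdma_device_init / load_ib_rdma_modules above simply loads the IB/RDMA core stack before any addresses are assigned; on its own it amounts to:

# Kernel modules required for RDMA-CM based NVMe-oF testing,
# exactly the set loaded by load_ib_rdma_modules in the trace above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done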
00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:07.702 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.702 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:22:07.702 altname enp24s0f0np0 00:22:07.702 altname ens785f0np0 00:22:07.702 inet 192.168.100.8/24 scope global mlx_0_0 00:22:07.702 valid_lft forever preferred_lft forever 00:22:07.702 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:07.703 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.703 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:22:07.703 altname enp24s0f1np1 00:22:07.703 altname ens785f1np1 00:22:07.703 inet 192.168.100.9/24 scope global mlx_0_1 00:22:07.703 valid_lft forever preferred_lft forever 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
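
The nested loops traced above pair every detected net device against the RDMA-capable (rxe) device list and use "continue 2" to jump to the next outer iteration as soon as a match is found. A condensed sketch of the same pattern, emitting the matching device names on stdout as the harness does:

    # Emit each net device that also appears in the RDMA-capable device list.
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2    # done with this net_dev, move on to the next one
            fi
        done
    done
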
00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:07.703 192.168.100.9' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:07.703 192.168.100.9' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # head -n 1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:07.703 192.168.100.9' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # tail -n +2 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # head -n 1 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:22:07.703 
18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3492658 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3492658 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3492658 ']' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.703 18:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:08.642 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=22ed95e906ad2d468b109aa0537189f9 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 
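
gen_dhchap_key, as traced above, draws random bytes with xxd, stages them in a temp file, and wraps the hex secret into SPDK's DHHC-1 key format through a short python step. A rough sketch of the shell portion that is visible in the trace (the exact DHHC-1 encoding performed by the python step is not shown in the log and is therefore omitted here):

    # Generate a 32-hex-character secret and stage it in a private temp file.
    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.vKI
    # ... the harness encodes "$key" as DHHC-1:00:<...>: and writes it into "$file" ...
    chmod 0600 "$file"
    echo "$file"
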
00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.vKI 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 22ed95e906ad2d468b109aa0537189f9 0 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 22ed95e906ad2d468b109aa0537189f9 0 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=22ed95e906ad2d468b109aa0537189f9 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:08.643 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.vKI 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.vKI 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vKI 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b327fc39d4e192f36c1c9dc18b172b30d608863a1f50d5c303abf5f04d48e679 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.d3y 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b327fc39d4e192f36c1c9dc18b172b30d608863a1f50d5c303abf5f04d48e679 3 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b327fc39d4e192f36c1c9dc18b172b30d608863a1f50d5c303abf5f04d48e679 3 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b327fc39d4e192f36c1c9dc18b172b30d608863a1f50d5c303abf5f04d48e679 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.d3y 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.d3y 00:22:08.903 18:27:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.d3y 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0879c182a90700d4383ed42eb941c6efdbcbe3c2ab9d2545 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.GdQ 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0879c182a90700d4383ed42eb941c6efdbcbe3c2ab9d2545 0 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0879c182a90700d4383ed42eb941c6efdbcbe3c2ab9d2545 0 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0879c182a90700d4383ed42eb941c6efdbcbe3c2ab9d2545 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:08.903 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.GdQ 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.GdQ 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.GdQ 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b892577f718ce5e5eb45d0cfae683bdc81497dace1453fcf 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.owI 00:22:08.904 
18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b892577f718ce5e5eb45d0cfae683bdc81497dace1453fcf 2 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b892577f718ce5e5eb45d0cfae683bdc81497dace1453fcf 2 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b892577f718ce5e5eb45d0cfae683bdc81497dace1453fcf 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:22:08.904 18:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.owI 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.owI 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.owI 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5b9fc9d9239efc94feb69046bcd9355e 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Ot2 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5b9fc9d9239efc94feb69046bcd9355e 1 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5b9fc9d9239efc94feb69046bcd9355e 1 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5b9fc9d9239efc94feb69046bcd9355e 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:22:08.904 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Ot2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Ot2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ot2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:09.164 18:27:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8f2fdc3f65fc62237f5675976ce7c680 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.OBO 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8f2fdc3f65fc62237f5675976ce7c680 1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8f2fdc3f65fc62237f5675976ce7c680 1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8f2fdc3f65fc62237f5675976ce7c680 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.OBO 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.OBO 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OBO 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b54ff2413db1db0bd6843aae7fa4eafb15d4aaaacc2e9a6d 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.8HK 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b54ff2413db1db0bd6843aae7fa4eafb15d4aaaacc2e9a6d 2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 
b54ff2413db1db0bd6843aae7fa4eafb15d4aaaacc2e9a6d 2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b54ff2413db1db0bd6843aae7fa4eafb15d4aaaacc2e9a6d 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.8HK 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.8HK 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8HK 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7c63aeb118a56c6c40a2c1f53ac6a16d 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Vik 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7c63aeb118a56c6c40a2c1f53ac6a16d 0 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7c63aeb118a56c6c40a2c1f53ac6a16d 0 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7c63aeb118a56c6c40a2c1f53ac6a16d 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Vik 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Vik 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Vik 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.164 
18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5bc8e9b5aca53e23a1e180919de32f2f060b5b912de08dc0c7c473bff998a558 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.UED 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5bc8e9b5aca53e23a1e180919de32f2f060b5b912de08dc0c7c473bff998a558 3 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5bc8e9b5aca53e23a1e180919de32f2f060b5b912de08dc0c7c473bff998a558 3 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5bc8e9b5aca53e23a1e180919de32f2f060b5b912de08dc0c7c473bff998a558 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:22:09.164 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:09.423 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.UED 00:22:09.423 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.UED 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UED 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3492658 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3492658 ']' 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
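
Taken together, the key-generation steps above leave the test with five key/controller-key pairs. Restated as plain assignments (paths copied from the trace; ckeys[4] is deliberately left empty):

    keys[0]=/tmp/spdk.key-null.vKI;    ckeys[0]=/tmp/spdk.key-sha512.d3y
    keys[1]=/tmp/spdk.key-null.GdQ;    ckeys[1]=/tmp/spdk.key-sha384.owI
    keys[2]=/tmp/spdk.key-sha256.Ot2;  ckeys[2]=/tmp/spdk.key-sha256.OBO
    keys[3]=/tmp/spdk.key-sha384.8HK;  ckeys[3]=/tmp/spdk.key-null.Vik
    keys[4]=/tmp/spdk.key-sha512.UED;  ckeys[4]=
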
00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vKI 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.d3y ]] 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.d3y 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.GdQ 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.owI ]] 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.owI 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.424 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ot2 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OBO ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OBO 00:22:09.683 18:27:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8HK 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Vik ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Vik 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UED 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:09.683 18:27:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:22:09.683 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:09.684 18:27:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:13.074 Waiting for block devices as requested 00:22:13.074 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:22:13.074 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:13.074 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:13.074 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:13.074 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:13.336 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:13.336 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:13.336 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:13.597 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:13.597 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:13.597 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:13.857 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:13.857 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:13.857 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:14.116 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:14.116 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:14.116 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:15.494 No valid GPT data, bailing 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:22:15.494 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo rdma 00:22:15.495 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:22:15.821 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:22:15.821 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:15.821 18:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:16.430 00:22:16.430 Discovery Log Number of Records 2, Generation counter 2 00:22:16.430 =====Discovery Log Entry 0====== 00:22:16.430 trtype: rdma 00:22:16.430 adrfam: ipv4 00:22:16.430 subtype: current discovery subsystem 00:22:16.430 treq: not specified, sq flow control disable supported 00:22:16.430 portid: 1 00:22:16.430 trsvcid: 4420 00:22:16.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:16.430 traddr: 192.168.100.8 00:22:16.430 eflags: none 00:22:16.430 rdma_prtype: not specified 00:22:16.430 rdma_qptype: connected 00:22:16.430 rdma_cms: rdma-cm 00:22:16.430 rdma_pkey: 0x0000 00:22:16.430 =====Discovery Log Entry 1====== 00:22:16.430 trtype: rdma 00:22:16.430 adrfam: ipv4 00:22:16.430 subtype: nvme subsystem 00:22:16.430 treq: not specified, sq flow control disable supported 00:22:16.430 portid: 1 00:22:16.430 trsvcid: 4420 00:22:16.430 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:16.430 traddr: 192.168.100.8 00:22:16.430 eflags: none 00:22:16.430 rdma_prtype: not specified 00:22:16.430 rdma_qptype: connected 00:22:16.430 rdma_cms: rdma-cm 00:22:16.430 rdma_pkey: 0x0000 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:16.430 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.431 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.691 nvme0n1 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.691 18:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.951 nvme0n1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
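
Each connect_authenticate round drives the SPDK host side over JSON-RPC: bdev_nvme_set_options selects the DH-HMAC-CHAP digests and DH groups, and bdev_nvme_attach_controller names the keyring entries registered earlier with keyring_file_add_key. Outside the harness's rpc_cmd wrapper, equivalent calls could be issued with SPDK's rpc.py (script path assumed), for example:

    # Attach to the kernel target at 192.168.100.8:4420 using key1/ckey1.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
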
00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.211 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.471 nvme0n1 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.471 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.041 nvme0n1 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.041 18:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.041 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.301 nvme0n1 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.301 18:27:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.301 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # 
local ip 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.561 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 nvme0n1 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 
00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.820 18:27:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 18:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.079 nvme0n1 00:22:19.079 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.079 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.079 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.079 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.079 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:19.339 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:19.340 18:27:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.340 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.599 nvme0n1 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
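For reference, the host-side steps being exercised above can be replayed by hand against a running SPDK target. This is a minimal sketch only: it assumes `rpc_cmd` in the test maps to `./scripts/rpc.py` from the SPDK tree, and that the DH-HMAC-CHAP secrets have already been registered in the keyring under the names key0/ckey0 earlier in the script; the command names and flags below are taken verbatim from the trace.
  # restrict the host to one digest/dhgroup combination (mirrors host/auth.sh@60)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach to the authenticated subsystem over RDMA, presenting key0 and controller key ckey0 (mirrors host/auth.sh@61)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller appeared, then detach before trying the next digest/dhgroup/keyid combination (host/auth.sh@64-65)
  ./scripts/rpc.py bdev_nvme_get_controllers
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0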
00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.599 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.600 18:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.169 nvme0n1 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.169 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.429 nvme0n1 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.429 18:27:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.429 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.689 18:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 nvme0n1 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.949 
18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.949 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.518 nvme0n1 00:22:21.518 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.518 
18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.518 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.519 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.088 nvme0n1 00:22:22.088 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.088 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.088 18:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:22.088 
18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.088 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 nvme0n1 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.607 18:27:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.866 nvme0n1 00:22:22.866 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.866 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.866 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.866 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.866 18:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.866 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.866 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.866 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.866 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.866 
18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.125 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.126 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.385 nvme0n1 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.385 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.644 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.644 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.644 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:23.644 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:23.644 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.645 18:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.214 nvme0n1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
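The xtrace above repeats one fixed cycle for every digest/dhgroup/keyid combination: program the key into the target, configure the host's DH-HMAC-CHAP options, attach over RDMA, verify the controller, then detach. A condensed paraphrase of that loop (not the verbatim host/auth.sh text; rpc_cmd, the keys[]/ckeys[] arrays, nvmet_auth_set_key and get_main_ns_ip all come from the surrounding test harness visible in the trace) is:

```bash
# Per-key authentication cycle as seen in the trace (sha256 digest throughout).
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!keys[@]}"; do
    # Program the key (and controller key, if any) on the kernel nvmet target side.
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"

    # Configure the SPDK host side to offer the matching digest and DH group ...
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"

    # ... and authenticate while connecting over RDMA.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" \
      ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

    # Verify the controller came up, then tear it down for the next iteration.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
```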
00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.214 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.782 nvme0n1 00:22:24.782 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.782 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.782 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.782 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.782 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
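Before every attach the trace expands the get_main_ns_ip helper from nvmf/common.sh: it maps the active transport to an environment variable name and dereferences it, which is why the same ip_candidates lines and the final "echo 192.168.100.8" recur throughout. A reconstruction from the expanded trace follows; the transport variable name (TEST_TRANSPORT here) is an assumption, since xtrace only shows its already-expanded value "rdma".

```bash
# Sketch of get_main_ns_ip as reconstructed from the expanded trace.
get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP

  # The transport variable name is assumed; the log only shows the value "rdma".
  [[ -z "$TEST_TRANSPORT" ]] && return 1
  [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1

  ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
  ip=${!ip}                              # indirect expansion: 192.168.100.8 in this run
  [[ -z "$ip" ]] && return 1
  echo "$ip"
}
```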
00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.783 18:27:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.783 18:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.355 nvme0n1 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:25.355 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:25.356 18:27:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.356 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.926 nvme0n1 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:25.926 18:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:25.926 18:27:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.926 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.496 nvme0n1 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 
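The nvmet_auth_set_key steps above echo four values per key: 'hmac(sha256)', the DH group, the DHHC-1 host secret, and (when present) the controller secret. xtrace does not show where those echoes are redirected, so the configfs attribute paths in the sketch below are an assumption (the standard Linux kernel nvmet per-host auth attributes), not something copied from the log.

```bash
# Sketch of what nvmet_auth_set_key does with the echoed values; the
# /sys/kernel/config/nvmet/hosts/... paths are assumed, not shown by xtrace.
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo "hmac(${digest})" > "$host/dhchap_hash"      # e.g. hmac(sha256)
  echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe8192
  echo "$key"            > "$host/dhchap_key"       # DHHC-1:xx:...: host secret
  # Controller secret only for bidirectional authentication (ckey may be empty).
  [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
```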
00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.496 18:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.434 nvme0n1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.434 18:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.003 nvme0n1 00:22:28.004 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.004 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.004 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.004 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.004 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:28.263 18:27:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.263 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.831 nvme0n1 00:22:28.831 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.831 
18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.831 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.831 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.831 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.831 18:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.091 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.660 nvme0n1 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.660 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.919 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.920 18:27:42 
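The nvmet_auth_set_key calls in the trace only show the values being echoed ('hmac(sha256)', the DH group, and the DHHC-1 secrets); the destinations are off-screen. Assuming the stock Linux nvmet configfs layout this suite drives, the writes look roughly like the sketch below; the host directory and attribute names are an assumption here, not something the trace itself shows:

  # hypothetical configfs paths for the allowed-host entry
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'    > "$host_dir/dhchap_hash"     # digest
  echo 'ffdhe8192'       > "$host_dir/dhchap_dhgroup"  # DH group
  echo "${keys[$keyid]}" > "$host_dir/dhchap_key"      # host secret (DHHC-1:...)
  # bidirectional auth: only written when ckeys[keyid] is non-empty
  [[ -n ${ckeys[$keyid]} ]] && echo "${ckeys[$keyid]}" > "$host_dir/dhchap_ctrl_key"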
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.920 18:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.488 nvme0n1 00:22:30.488 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.488 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.488 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.489 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.748 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.008 nvme0n1 00:22:31.008 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.008 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.008 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.008 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.008 18:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.008 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.268 nvme0n1 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.268 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.528 18:27:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.528 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.788 nvme0n1 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:31.788 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.789 18:27:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.789 18:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.357 nvme0n1 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.357 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.358 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:32.617 nvme0n1 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:32.617 
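By this point the run has moved on to sha384 with ffdhe3072, which makes the overall shape of the test easy to read off the host/auth.sh line markers (@100 through @104): three nested loops over digests, DH groups and key indices, each iteration setting the target key and then re-attaching. Reconstructed from those markers:

  for digest in "${digests[@]}"; do            # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ..., ffdhe8192
          for keyid in "${!keys[@]}"; do       # 0..4
              nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
          done
      done
  done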
18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.617 18:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.186 nvme0n1 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.186 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.187 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.446 nvme0n1 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.446 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.111 nvme0n1 00:22:34.111 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.111 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.111 18:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.111 18:27:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.111 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
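One artifact of the trace worth decoding: the repeated [[ nvme0 == \n\v\m\e\0 ]] lines are not failed comparisons; that is simply how bash xtrace prints the quoted right-hand side of a literal string match. The check and teardown in plain form, with the values from this run:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                     # xtrace renders the RHS as \n\v\m\e\0
  rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next key/dhgroup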
ip_candidates=() 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.112 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.438 nvme0n1 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:34.438 18:27:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.438 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.697 nvme0n1 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.957 18:27:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.957 18:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.217 nvme0n1 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.217 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.476 18:27:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:35.476 18:27:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.476 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.735 nvme0n1 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.735 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.994 18:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.253 nvme0n1 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:22:36.253 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.512 18:27:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.512 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.771 nvme0n1 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:36.771 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:36.772 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:37.030 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:37.030 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.030 18:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 nvme0n1 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 18:27:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:37.289 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.290 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.856 nvme0n1 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.856 18:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.856 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.856 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.856 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.856 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
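
The same set-key / connect / verify / detach cycle repeats above for every (dhgroup, keyid) combination. As a reading aid, here is a minimal bash sketch of the driving loop implied by the host/auth.sh@101-104 references in the trace; it assumes the auth.sh helpers (nvmet_auth_set_key, connect_authenticate) are already sourced, and the key arrays and dhgroup list below are placeholders rather than the job's real secrets:

# Sketch only: keys[]/ckeys[] hold per-slot DHHC-1 secrets; an empty ckey means no controller key.
keys=(  "DHHC-1:00:<key0>:" "DHHC-1:00:<key1>:" "DHHC-1:01:<key2>:" "DHHC-1:02:<key3>:" "DHHC-1:03:<key4>:" )
ckeys=( "DHHC-1:03:<ckey0>:" "DHHC-1:02:<ckey1>:" "DHHC-1:01:<ckey2>:" "DHHC-1:00:<ckey3>:" "" )
dhgroups=( ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 )   # groups visible in this part of the log

for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
    for keyid in "${!keys[@]}"; do             # host/auth.sh@102
        # Program the target side with the digest, DH group and key (plus ctrlr key if one is set).
        nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"     # host/auth.sh@103
        # Configure the host, attach the controller, verify it, then tear it down.
        connect_authenticate "sha384" "$dhgroup" "$keyid"   # host/auth.sh@104
    done
done
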
00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.116 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.684 nvme0n1 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:38.684 18:27:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:38.684 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.685 18:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.252 nvme0n1 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
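
For reference, the connect_authenticate steps that produce the host-side portion of each cycle above can be condensed into the sketch below. rpc_cmd is the autotest wrapper around scripts/rpc.py; the address, port and NQNs are the ones appearing in the trace (get_main_ns_ip resolves to 192.168.100.8 here), and the real helper's xtrace handling and error checks are omitted:

connect_authenticate() {                       # condensed sketch of host/auth.sh@55-65
    local digest=$1 dhgroup=$2 keyid=$3
    # Only pass a controller key if one was configured for this slot.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over RDMA using the key slot under test.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded if the controller shows up, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
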
00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.252 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.253 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 nvme0n1 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:39.821 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.822 18:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 nvme0n1 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.390 18:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.327 nvme0n1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.327 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.328 18:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.895 nvme0n1 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.895 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:42.154 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:42.155 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.155 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.155 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.722 nvme0n1 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.722 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.723 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:42.982 
18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.982 18:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.550 nvme0n1 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.550 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:43.808 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:43.809 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.809 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.809 18:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.376 nvme0n1 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:44.376 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:44.377 18:27:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.377 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.946 nvme0n1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.946 18:27:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.946 18:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 nvme0n1 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.205 18:27:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.205 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 nvme0n1 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.772 18:27:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.772 18:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 nvme0n1 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:46.032 18:27:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.032 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.601 nvme0n1 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.601 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 nvme0n1 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.860 18:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:46.860 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.119 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.378 nvme0n1 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.378 18:28:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.378 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.379 18:28:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.379 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.947 nvme0n1 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 
00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.947 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.948 18:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.948 18:28:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.206 nvme0n1 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.206 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.207 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.774 nvme0n1 00:22:48.774 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.774 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.774 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.774 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.774 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
192.168.100.8 ]] 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.775 18:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.343 nvme0n1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.343 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.601 nvme0n1 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.601 18:28:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.601 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:49.860 18:28:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.860 18:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.119 nvme0n1 00:22:50.119 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.119 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.119 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.120 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.379 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.638 nvme0n1 00:22:50.638 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.639 
18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.639 18:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.208 nvme0n1 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:51.208 18:28:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.208 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.776 nvme0n1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.776 18:28:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.776 18:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.344 nvme0n1 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
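The block above is one full pass of connect_authenticate for key 1 with sha512/ffdhe6144; the same host-side sequence repeats for every key id and DH group in the rest of this trace. A condensed sketch of that sequence, using only the RPCs and flags that appear verbatim in the log (the helper name connect_once and the local variable defaults are illustrative, not part of the test script):

# Condensed sketch of the host-side cycle repeated in the trace above.
# rpc_cmd is the test suite's JSON-RPC wrapper seen throughout this log.
connect_once() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ip=192.168.100.8 port=4420
    local hostnqn=nqn.2024-02.io.spdk:host0 subnqn=nqn.2024-02.io.spdk:cnode0

    # Restrict the initiator to a single digest/DH-group pair for this round.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over RDMA with the previously registered DH-HMAC-CHAP keys (keyN/ckeyN).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s "$port" \
        -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Authentication succeeded only if the controller actually shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear the controller down so the next digest/group/key combination can be tried.
    rpc_cmd bdev_nvme_detach_controller nvme0
}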
00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:52.344 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 
00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.345 18:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.913 nvme0n1 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.913 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.172 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.741 nvme0n1 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.741 18:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.310 nvme0n1 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 
00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlZDk1ZTkwNmFkMmQ0NjhiMTA5YWEwNTM3MTg5Zjn7Gtn3: 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMyN2ZjMzlkNGUxOTJmMzZjMWM5ZGMxOGIxNzJiMzBkNjA4ODYzYTFmNTBkNWMzMDNhYmY1ZjA0ZDQ4ZTY3OUNNZPQ=: 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.310 18:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.247 nvme0n1 
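Before each of these attach attempts, nvmet_auth_set_key programs the matching key on the kernel target. The xtrace only records the echoed values (the 'hmac(sha512)' digest string, the DH group name, and the DHHC-1 key material), not the files they are written to. A sketch of that target-side half, assuming the standard Linux nvmet configfs host attributes as the destinations; the paths and attribute names below are an assumption, not read from this log:

# Sketch of what nvmet_auth_set_key's echoes plausibly land in.
# The configfs destinations are assumed (standard nvmet host attributes),
# only the echoed values themselves appear in the trace.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_cfg/dhchap_hash"     # assumed destination
    echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"  # assumed destination
    echo "$key"          > "$host_cfg/dhchap_key"      # assumed destination
    if [[ -n $ckey ]]; then
        # Controller (bidirectional) key is only set when one exists for this key id.
        echo "$ckey" > "$host_cfg/dhchap_ctrl_key"     # assumed destination
    fi
}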
00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:55.247 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.248 18:28:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.248 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 nvme0n1 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 
00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.815 18:28:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.751 nvme0n1 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.751 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU0ZmYyNDEzZGIxZGIwYmQ2ODQzYWFlN2ZhNGVhZmIxNWQ0YWFhYWNjMmU5YTZkcuBM4A==: 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2M2M2FlYjExOGE1NmM2YzQwYTJjMWY1M2FjNmExNmQmx4K2: 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:56.752 18:28:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.752 18:28:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.689 nvme0n1 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
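One detail worth noting in these repeated traces: the controller-key argument is built with bash's ${var:+...} expansion at host/auth.sh@58, so key ids that have no controller key are attached with unidirectional authentication only; that is why the key4 attach in the ffdhe6144 round above carries no --dhchap-ctrlr-key flag. A minimal illustration of the idiom (key strings are placeholders, not the test's real keys):

# Minimal illustration of the ${ckeys[keyid]:+...} idiom from host/auth.sh@58:
# the --dhchap-ctrlr-key pair is emitted only when a controller key exists.
ckeys=("DHHC-1:03:placeholder-ctrl-key:" "")   # id 0 has a ctrl key, id 1 does not
for keyid in 0 1; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no --dhchap-ctrlr-key, unidirectional>}"
done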
00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWJjOGU5YjVhY2E1M2UyM2ExZTE4MDkxOWRlMzJmMmYwNjBiNWI5MTJkZTA4ZGMwYzdjNDczYmZmOTk4YTU1OFyskjo=: 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.689 18:28:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.257 nvme0n1 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:22:58.257 
18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.257 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.258 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.517 request: 00:22:58.517 { 00:22:58.517 "name": "nvme0", 00:22:58.517 "trtype": "rdma", 00:22:58.517 "traddr": "192.168.100.8", 00:22:58.517 "adrfam": "ipv4", 00:22:58.517 "trsvcid": "4420", 00:22:58.517 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:22:58.517 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:58.517 "prchk_reftag": false, 00:22:58.517 "prchk_guard": false, 00:22:58.517 "hdgst": false, 00:22:58.517 "ddgst": false, 00:22:58.517 "allow_unrecognized_csi": false, 00:22:58.517 "method": "bdev_nvme_attach_controller", 00:22:58.517 "req_id": 1 00:22:58.517 } 00:22:58.517 Got JSON-RPC error response 00:22:58.517 response: 00:22:58.517 { 00:22:58.517 "code": -5, 00:22:58.517 "message": "Input/output error" 00:22:58.517 } 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.517 18:28:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.085 request: 00:22:59.085 { 00:22:59.085 "name": "nvme0", 00:22:59.085 "trtype": "rdma", 00:22:59.085 "traddr": "192.168.100.8", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:59.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:59.085 "prchk_reftag": false, 00:22:59.085 "prchk_guard": false, 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false, 00:22:59.085 "dhchap_key": "key2", 00:22:59.085 "allow_unrecognized_csi": false, 00:22:59.085 "method": "bdev_nvme_attach_controller", 00:22:59.085 "req_id": 1 00:22:59.085 } 00:22:59.085 Got JSON-RPC error response 00:22:59.085 response: 00:22:59.085 { 00:22:59.085 "code": -5, 00:22:59.085 "message": "Input/output error" 00:22:59.085 } 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma 
]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:59.085 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.086 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.360 request: 00:22:59.360 { 00:22:59.360 "name": "nvme0", 00:22:59.360 "trtype": "rdma", 00:22:59.360 "traddr": "192.168.100.8", 00:22:59.360 "adrfam": "ipv4", 00:22:59.360 "trsvcid": "4420", 00:22:59.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:59.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:59.360 "prchk_reftag": false, 00:22:59.360 "prchk_guard": false, 00:22:59.360 "hdgst": false, 00:22:59.360 "ddgst": false, 00:22:59.360 "dhchap_key": "key1", 00:22:59.360 "dhchap_ctrlr_key": "ckey2", 00:22:59.360 "allow_unrecognized_csi": false, 00:22:59.360 "method": "bdev_nvme_attach_controller", 00:22:59.360 "req_id": 1 00:22:59.360 } 00:22:59.360 Got JSON-RPC error response 00:22:59.360 response: 00:22:59.360 { 00:22:59.360 "code": -5, 00:22:59.360 "message": "Input/output error" 00:22:59.360 } 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:59.360 18:28:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:59.360 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.361 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.361 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.638 nvme0n1 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:22:59.638 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.639 
18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.639 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.898 request: 00:22:59.898 { 00:22:59.898 "name": "nvme0", 00:22:59.898 "dhchap_key": "key1", 00:22:59.898 "dhchap_ctrlr_key": "ckey2", 00:22:59.898 "method": "bdev_nvme_set_keys", 00:22:59.898 "req_id": 1 00:22:59.898 } 00:22:59.898 Got JSON-RPC error response 00:22:59.898 response: 00:22:59.898 { 00:22:59.898 "code": -13, 00:22:59.898 "message": "Permission denied" 00:22:59.898 } 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.898 18:28:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.157 18:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.157 18:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:00.157 18:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:01.093 18:28:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:02.029 18:28:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg3OWMxODJhOTA3MDBkNDM4M2VkNDJlYjk0MWM2ZWZkYmNiZTNjMmFiOWQyNTQ1OFqiIw==: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg5MjU3N2Y3MThjZTVlNWViNDVkMGNmYWU2ODNiZGM4MTQ5N2RhY2UxNDUzZmNmEvnQ4w==: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.406 nvme0n1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWI5ZmM5ZDkyMzllZmM5NGZlYjY5MDQ2YmNkOTM1NWUozsfy: 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: ]] 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYyZmRjM2Y2NWZjNjIyMzdmNTY3NTk3NmNlN2M2ODBNeSlH: 
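The trace above walks SPDK's in-band DH-HMAC-CHAP host authentication over RDMA: bdev_nvme_attach_controller without a key, with only key2, or with a mismatched key1/ckey2 pair fails with -5 (Input/output error); the attach with key1/ckey1 succeeds; and bdev_nvme_set_keys is then used to re-key the live controller, with mismatched pairs (key1/ckey2, key2/ckey1) rejected with -13 (Permission denied). A minimal sketch of the same sequence, assuming the target configured earlier in this run (subsystem nqn.2024-02.io.spdk:cnode0 listening on 192.168.100.8:4420 over RDMA) and driven through scripts/rpc.py, which the rpc_cmd helper seen in the trace wraps:

  # Attach, authenticating the host with key1 and checking the controller against ckey1
  # (the same arguments host/auth.sh passes via rpc_cmd above).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Re-authenticate the existing controller with a new pair; it must match what the
  # target side was re-keyed to, otherwise the RPC fails with "Permission denied".
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

Here key1/ckey1/key2/ckey2 are keyring entry names registered from the /tmp/spdk.key-* files created earlier in the test (and removed during its cleanup later in the log), not the DHHC-1 secrets themselves.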
00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.406 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.665 request: 00:23:03.665 { 00:23:03.665 "name": "nvme0", 00:23:03.665 "dhchap_key": "key2", 00:23:03.665 "dhchap_ctrlr_key": "ckey1", 00:23:03.665 "method": "bdev_nvme_set_keys", 00:23:03.665 "req_id": 1 00:23:03.665 } 00:23:03.665 Got JSON-RPC error response 00:23:03.665 response: 00:23:03.665 { 00:23:03.665 "code": -13, 00:23:03.665 "message": "Permission denied" 00:23:03.665 } 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:03.665 18:28:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:04.602 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.602 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:04.602 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.602 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.861 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.861 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- 
# (( 1 != 0 )) 00:23:04.861 18:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:05.798 18:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.798 18:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:06.058 18:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.058 18:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.058 18:28:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:06.058 rmmod nvme_rdma 00:23:06.058 rmmod nvme_fabrics 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3492658 ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3492658 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3492658 ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3492658 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3492658 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3492658' 00:23:06.058 killing process with pid 3492658 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3492658 00:23:06.058 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3492658 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:23:06.318 18:28:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:09.607 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:09.866 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:09.866 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:09.866 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:10.125 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:10.125 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:10.125 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:10.384 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:10.384 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:10.384 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:10.384 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:10.643 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:10.643 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:10.643 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:10.902 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:10.902 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:13.438 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:23:14.007 18:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vKI /tmp/spdk.key-null.GdQ /tmp/spdk.key-sha256.Ot2 /tmp/spdk.key-sha384.8HK /tmp/spdk.key-sha512.UED /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:14.007 18:28:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:17.300 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:17.300 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.4 
(8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:17.300 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:17.868 00:23:17.868 real 1m17.206s 00:23:17.868 user 0m52.476s 00:23:17.868 sys 0m17.679s 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.868 ************************************ 00:23:17.868 END TEST nvmf_auth_host 00:23:17.868 ************************************ 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.868 ************************************ 00:23:17.868 START TEST nvmf_bdevperf 00:23:17.868 ************************************ 00:23:17.868 18:28:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:18.128 * Looking for test storage... 
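Before the bdevperf run below builds its own environment, the auth test's cleanup (a few entries above) tears down the Linux-kernel nvmet target that served as the authentication peer by unwinding its configfs tree and unloading the modules. Roughly the following, assuming the paths printed in the trace; the target of the bare `echo 0` is taken to be the namespace's enable attribute, as in nvmf/common.sh clean_kernel_target:

  # Unlink the allowed host and remove the host entry
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # Disable the namespace, detach the subsystem from port 1, then remove the tree bottom-up
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # Unload the kernel target modules once nothing references them
  modprobe -r nvmet_rdma nvmet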
00:23:18.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.128 --rc genhtml_branch_coverage=1 00:23:18.128 --rc genhtml_function_coverage=1 00:23:18.128 --rc genhtml_legend=1 00:23:18.128 --rc geninfo_all_blocks=1 00:23:18.128 --rc geninfo_unexecuted_blocks=1 00:23:18.128 00:23:18.128 ' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.128 --rc genhtml_branch_coverage=1 00:23:18.128 --rc genhtml_function_coverage=1 00:23:18.128 --rc genhtml_legend=1 00:23:18.128 --rc geninfo_all_blocks=1 00:23:18.128 --rc geninfo_unexecuted_blocks=1 00:23:18.128 00:23:18.128 ' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.128 --rc genhtml_branch_coverage=1 00:23:18.128 --rc genhtml_function_coverage=1 00:23:18.128 --rc genhtml_legend=1 00:23:18.128 --rc geninfo_all_blocks=1 00:23:18.128 --rc geninfo_unexecuted_blocks=1 00:23:18.128 00:23:18.128 ' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.128 --rc genhtml_branch_coverage=1 00:23:18.128 --rc genhtml_function_coverage=1 00:23:18.128 --rc genhtml_legend=1 00:23:18.128 --rc geninfo_all_blocks=1 00:23:18.128 --rc geninfo_unexecuted_blocks=1 00:23:18.128 00:23:18.128 ' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.128 18:28:31 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.128 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.128 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:18.129 18:28:31 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.129 18:28:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.700 18:28:37 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:24.700 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:24.700 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:24.700 Found net devices under 0000:18:00.0: mlx_0_0 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:24.700 Found net devices under 0000:18:00.1: mlx_0_1 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # rdma_device_init 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:24.700 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:24.960 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:24.960 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:24.960 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:24.961 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:24.961 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:23:24.961 altname enp24s0f0np0 00:23:24.961 altname ens785f0np0 00:23:24.961 inet 192.168.100.8/24 scope global mlx_0_0 00:23:24.961 valid_lft forever preferred_lft forever 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:24.961 18:28:37 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:24.961 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:24.961 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:23:24.961 altname enp24s0f1np1 00:23:24.961 altname ens785f1np1 00:23:24.961 inet 192.168.100.9/24 scope global mlx_0_1 00:23:24.961 valid_lft forever preferred_lft forever 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:24.961 18:28:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:24.961 18:28:38 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:23:24.961 192.168.100.9' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:23:24.961 192.168.100.9' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # head -n 1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:23:24.961 192.168.100.9' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # tail -n +2 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # head -n 1 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3506938 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3506938 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3506938 ']' 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.961 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:25.221 [2024-10-08 18:28:38.155325] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:23:25.221 [2024-10-08 18:28:38.155384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.221 [2024-10-08 18:28:38.241610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.221 [2024-10-08 18:28:38.324722] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.221 [2024-10-08 18:28:38.324763] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.221 [2024-10-08 18:28:38.324772] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.221 [2024-10-08 18:28:38.324781] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.221 [2024-10-08 18:28:38.324787] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
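The nvmfappstart/waitforlisten sequence logged here amounts to starting nvmf_tgt in the background and holding off all configuration RPCs until the application is listening on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming the workspace paths used in this run and substituting an rpc_get_methods probe for the helper's own readiness check (the real logic lives in the test's common.sh helpers):

NVMF_APP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Start the target in the background on cores 0xE, as in the run above.
"$NVMF_APP" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the default RPC socket until the target answers, bailing out if it dies first.
until "$RPC_PY" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done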
00:23:25.221 [2024-10-08 18:28:38.325653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.221 [2024-10-08 18:28:38.325756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.221 [2024-10-08 18:28:38.325757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.157 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.157 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:26.157 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:26.157 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.157 18:28:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.157 [2024-10-08 18:28:39.104088] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fd1ab0/0x1fd5fa0) succeed. 00:23:26.157 [2024-10-08 18:28:39.115640] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd3050/0x2017640) succeed. 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.157 Malloc0 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:23:26.157 [2024-10-08 18:28:39.258301] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.157 { 00:23:26.157 "params": { 00:23:26.157 "name": "Nvme$subsystem", 00:23:26.157 "trtype": "$TEST_TRANSPORT", 00:23:26.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.157 "adrfam": "ipv4", 00:23:26.157 "trsvcid": "$NVMF_PORT", 00:23:26.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.157 "hdgst": ${hdgst:-false}, 00:23:26.157 "ddgst": ${ddgst:-false} 00:23:26.157 }, 00:23:26.157 "method": "bdev_nvme_attach_controller" 00:23:26.157 } 00:23:26.157 EOF 00:23:26.157 )") 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:23:26.157 18:28:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:26.157 "params": { 00:23:26.157 "name": "Nvme1", 00:23:26.157 "trtype": "rdma", 00:23:26.157 "traddr": "192.168.100.8", 00:23:26.157 "adrfam": "ipv4", 00:23:26.157 "trsvcid": "4420", 00:23:26.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.157 "hdgst": false, 00:23:26.157 "ddgst": false 00:23:26.157 }, 00:23:26.157 "method": "bdev_nvme_attach_controller" 00:23:26.157 }' 00:23:26.157 [2024-10-08 18:28:39.311852] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:23:26.157 [2024-10-08 18:28:39.311912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507024 ] 00:23:26.416 [2024-10-08 18:28:39.397132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.416 [2024-10-08 18:28:39.482422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.675 Running I/O for 1 seconds... 
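The gen_nvmf_target_json output printed above is handed to bdevperf as a file descriptor (--json /dev/fd/62), i.e. via bash process substitution. A sketch of the equivalent invocation follows; only the bdev_nvme_attach_controller entry is taken verbatim from the trace, while the surrounding "subsystems"/"bdev" envelope is assumed to match the usual SPDK JSON-config layout, and gen_cfg is a hypothetical stand-in for the helper:

# Emit the bdev subsystem config that attaches Nvme1 over RDMA to the listener created above.
gen_cfg() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# 128-deep, 4 KiB verify workload for 1 second, matching the first run logged here.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_cfg) -q 128 -o 4096 -w verify -t 1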
00:23:27.612 17920.00 IOPS, 70.00 MiB/s 00:23:27.612 Latency(us) 00:23:27.612 [2024-10-08T16:28:40.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:27.612 Verification LBA range: start 0x0 length 0x4000 00:23:27.612 Nvme1n1 : 1.01 17959.89 70.16 0.00 0.00 7085.10 1994.57 11169.61 00:23:27.612 [2024-10-08T16:28:40.785Z] =================================================================================================================== 00:23:27.612 [2024-10-08T16:28:40.785Z] Total : 17959.89 70.16 0.00 0.00 7085.10 1994.57 11169.61 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3507314 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:27.871 { 00:23:27.871 "params": { 00:23:27.871 "name": "Nvme$subsystem", 00:23:27.871 "trtype": "$TEST_TRANSPORT", 00:23:27.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.871 "adrfam": "ipv4", 00:23:27.871 "trsvcid": "$NVMF_PORT", 00:23:27.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.871 "hdgst": ${hdgst:-false}, 00:23:27.871 "ddgst": ${ddgst:-false} 00:23:27.871 }, 00:23:27.871 "method": "bdev_nvme_attach_controller" 00:23:27.871 } 00:23:27.871 EOF 00:23:27.871 )") 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:23:27.871 18:28:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:27.871 "params": { 00:23:27.871 "name": "Nvme1", 00:23:27.871 "trtype": "rdma", 00:23:27.871 "traddr": "192.168.100.8", 00:23:27.871 "adrfam": "ipv4", 00:23:27.871 "trsvcid": "4420", 00:23:27.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.871 "hdgst": false, 00:23:27.871 "ddgst": false 00:23:27.871 }, 00:23:27.871 "method": "bdev_nvme_attach_controller" 00:23:27.871 }' 00:23:27.871 [2024-10-08 18:28:40.953967] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
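The second bdevperf invocation above (-q 128 -o 4096 -w verify -t 15 -f) is the failover half of the test: the target is deliberately killed a few seconds into the 15-second run, which is why the host-side qpair logs below show outstanding reads completing with ABORTED - SQ DELETION. A rough sketch of that sequence, reusing the hypothetical gen_cfg helper from the earlier sketch and the nvmfpid recorded when the target was started (both names are assumptions; the pids in this log are 3507314 and 3506938):

# Start the long verify run in the background against the same subsystem.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_cfg) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!

sleep 3              # let the workload ramp up
kill -9 "$nvmfpid"   # take the nvmf target down mid-run
sleep 3              # leave bdevperf holding aborted I/O against the dead target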
00:23:27.871 [2024-10-08 18:28:40.954040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507314 ] 00:23:27.871 [2024-10-08 18:28:41.040813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.131 [2024-10-08 18:28:41.126798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.390 Running I/O for 15 seconds... 00:23:30.265 18081.00 IOPS, 70.63 MiB/s [2024-10-08T16:28:44.007Z] 18176.00 IOPS, 71.00 MiB/s [2024-10-08T16:28:44.007Z] 18:28:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3506938 00:23:30.834 18:28:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:23:32.001 15709.67 IOPS, 61.37 MiB/s [2024-10-08T16:28:45.174Z] [2024-10-08 18:28:44.940319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 
m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.001 [2024-10-08 18:28:44.940926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180800 00:23:32.001 [2024-10-08 18:28:44.940935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.940945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.940955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.940967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.940976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.940986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.940995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:115160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115232 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007588000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x180800 00:23:32.002 
[2024-10-08 18:28:44.941433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180800 00:23:32.002 [2024-10-08 18:28:44.941689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.002 [2024-10-08 18:28:44.941699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.941961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 
dnr:0 00:23:32.003 [2024-10-08 18:28:44.941981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.941991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:119 nsid:1 lba:115672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180800 00:23:32.003 [2024-10-08 18:28:44.942457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.003 [2024-10-08 18:28:44.942476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.003 [2024-10-08 18:28:44.942486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.942882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.942891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.951649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.951677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.951690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.004 [2024-10-08 18:28:44.951702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f249a000 sqhd:7250 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.953616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.004 [2024-10-08 18:28:44.953637] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.004 [2024-10-08 18:28:44.953648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115904 len:8 PRP1 0x0 PRP2 0x0 00:23:32.004 [2024-10-08 18:28:44.953660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.953712] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller. 00:23:32.004 [2024-10-08 18:28:44.953749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.004 [2024-10-08 18:28:44.953763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:a10730 sqhd:0050 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.953775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.004 [2024-10-08 18:28:44.953787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:a10730 sqhd:0050 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.953799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.004 [2024-10-08 18:28:44.953811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:a10730 sqhd:0050 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.953822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.004 [2024-10-08 18:28:44.953834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:a10730 sqhd:0050 p:0 m:0 dnr:0 00:23:32.004 [2024-10-08 18:28:44.971868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:32.004 [2024-10-08 18:28:44.971889] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:32.004 [2024-10-08 18:28:44.971901] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:32.004 [2024-10-08 18:28:44.974967] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:32.004 [2024-10-08 18:28:44.977663] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:32.004 [2024-10-08 18:28:44.977685] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:32.004 [2024-10-08 18:28:44.977695] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:23:32.860 11782.25 IOPS, 46.02 MiB/s [2024-10-08T16:28:46.033Z] [2024-10-08 18:28:45.981652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:32.860 [2024-10-08 18:28:45.981679] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:32.860 [2024-10-08 18:28:45.981858] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:32.860 [2024-10-08 18:28:45.981870] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:32.860 [2024-10-08 18:28:45.981881] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:32.860 [2024-10-08 18:28:45.983675] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:32.860 [2024-10-08 18:28:45.984645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.860 [2024-10-08 18:28:45.996591] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:32.860 [2024-10-08 18:28:45.999257] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:32.860 [2024-10-08 18:28:45.999278] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:32.860 [2024-10-08 18:28:45.999287] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:23:33.997 9425.80 IOPS, 36.82 MiB/s [2024-10-08T16:28:47.170Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3506938 Killed "${NVMF_APP[@]}" "$@" 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3508093 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3508093 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3508093 ']' 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.997 18:28:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:33.997 [2024-10-08 18:28:46.975829] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:23:33.997 [2024-10-08 18:28:46.975888] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.997 [2024-10-08 18:28:47.003231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:33.997 [2024-10-08 18:28:47.003264] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:33.997 [2024-10-08 18:28:47.003443] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.997 [2024-10-08 18:28:47.003454] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:33.997 [2024-10-08 18:28:47.003465] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:33.997 [2024-10-08 18:28:47.006212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:33.997 [2024-10-08 18:28:47.009128] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:33.997 [2024-10-08 18:28:47.011662] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:33.997 [2024-10-08 18:28:47.011686] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:33.997 [2024-10-08 18:28:47.011695] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:23:33.997 [2024-10-08 18:28:47.050257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.997 [2024-10-08 18:28:47.142979] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.997 [2024-10-08 18:28:47.143021] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.997 [2024-10-08 18:28:47.143032] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.997 [2024-10-08 18:28:47.143041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.997 [2024-10-08 18:28:47.143048] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.997 [2024-10-08 18:28:47.143761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.997 [2024-10-08 18:28:47.143792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.997 [2024-10-08 18:28:47.143792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.824 7854.83 IOPS, 30.68 MiB/s [2024-10-08T16:28:47.997Z] 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.824 18:28:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:34.824 [2024-10-08 18:28:47.930856] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x688ab0/0x68cfa0) succeed. 00:23:34.824 [2024-10-08 18:28:47.941521] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x68a050/0x6ce640) succeed. 00:23:35.084 [2024-10-08 18:28:48.015651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:35.084 [2024-10-08 18:28:48.015692] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.084 [2024-10-08 18:28:48.015873] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.084 [2024-10-08 18:28:48.015885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.084 [2024-10-08 18:28:48.015897] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:35.084 [2024-10-08 18:28:48.018661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.084 [2024-10-08 18:28:48.026164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.084 [2024-10-08 18:28:48.028776] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:35.084 [2024-10-08 18:28:48.028801] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:35.084 [2024-10-08 18:28:48.028810] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.084 Malloc0 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.084 [2024-10-08 18:28:48.084116] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.084 18:28:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3507314 00:23:35.908 6732.71 IOPS, 26.30 MiB/s [2024-10-08T16:28:49.081Z] [2024-10-08 18:28:49.032613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:35.908 [2024-10-08 18:28:49.032640] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:35.908 [2024-10-08 18:28:49.032816] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.908 [2024-10-08 18:28:49.032828] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.908 [2024-10-08 18:28:49.032839] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:35.908 [2024-10-08 18:28:49.035578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.908 [2024-10-08 18:28:49.042206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.168 [2024-10-08 18:28:49.091280] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:37.546 6446.62 IOPS, 25.18 MiB/s [2024-10-08T16:28:51.656Z] 7734.33 IOPS, 30.21 MiB/s [2024-10-08T16:28:52.592Z] 8768.20 IOPS, 34.25 MiB/s [2024-10-08T16:28:53.530Z] 9617.00 IOPS, 37.57 MiB/s [2024-10-08T16:28:54.467Z] 10323.42 IOPS, 40.33 MiB/s [2024-10-08T16:28:55.404Z] 10920.08 IOPS, 42.66 MiB/s [2024-10-08T16:28:56.783Z] 11432.00 IOPS, 44.66 MiB/s [2024-10-08T16:28:56.783Z] 11875.00 IOPS, 46.39 MiB/s 00:23:43.610 Latency(us) 00:23:43.610 [2024-10-08T16:28:56.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.610 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:43.610 Verification LBA range: start 0x0 length 0x4000 00:23:43.610 Nvme1n1 : 15.01 11875.38 46.39 13634.36 0.00 4998.25 359.74 1064988.49 00:23:43.610 [2024-10-08T16:28:56.783Z] =================================================================================================================== 00:23:43.610 [2024-10-08T16:28:56.783Z] Total : 11875.38 46.39 13634.36 0.00 4998.25 359.74 1064988.49 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:43.610 rmmod nvme_rdma 00:23:43.610 rmmod nvme_fabrics 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.610 18:28:56 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3508093 ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3508093 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3508093 ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3508093 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3508093 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3508093' 00:23:43.610 killing process with pid 3508093 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3508093 00:23:43.610 18:28:56 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3508093 00:23:43.869 18:28:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:43.869 18:28:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:23:43.869 00:23:43.869 real 0m26.074s 00:23:43.869 user 1m5.449s 00:23:43.869 sys 0m6.636s 00:23:43.869 18:28:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.869 18:28:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:43.869 ************************************ 00:23:43.869 END TEST nvmf_bdevperf 00:23:43.869 ************************************ 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.128 ************************************ 00:23:44.128 START TEST nvmf_target_disconnect 00:23:44.128 ************************************ 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:44.128 * Looking for test storage... 
00:23:44.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.128 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.129 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.388 --rc genhtml_branch_coverage=1 00:23:44.388 --rc genhtml_function_coverage=1 00:23:44.388 --rc genhtml_legend=1 00:23:44.388 --rc geninfo_all_blocks=1 00:23:44.388 --rc geninfo_unexecuted_blocks=1 00:23:44.388 00:23:44.388 ' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.388 --rc genhtml_branch_coverage=1 00:23:44.388 --rc genhtml_function_coverage=1 00:23:44.388 --rc genhtml_legend=1 00:23:44.388 --rc geninfo_all_blocks=1 00:23:44.388 --rc geninfo_unexecuted_blocks=1 00:23:44.388 00:23:44.388 ' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.388 --rc genhtml_branch_coverage=1 00:23:44.388 --rc genhtml_function_coverage=1 00:23:44.388 --rc genhtml_legend=1 00:23:44.388 --rc geninfo_all_blocks=1 00:23:44.388 --rc geninfo_unexecuted_blocks=1 00:23:44.388 00:23:44.388 ' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.388 --rc genhtml_branch_coverage=1 00:23:44.388 --rc genhtml_function_coverage=1 00:23:44.388 --rc genhtml_legend=1 00:23:44.388 --rc geninfo_all_blocks=1 00:23:44.388 --rc geninfo_unexecuted_blocks=1 00:23:44.388 00:23:44.388 ' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.388 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.389 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.389 18:28:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:50.958 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:50.959 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:50.959 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:50.959 18:29:03 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:50.959 Found net devices under 0000:18:00.0: mlx_0_0 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:50.959 Found net devices under 0000:18:00.1: mlx_0_1 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 
00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:50.959 18:29:03 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:50.959 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:50.959 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:50.960 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:23:50.960 altname enp24s0f0np0 00:23:50.960 altname ens785f0np0 00:23:50.960 inet 192.168.100.8/24 scope global mlx_0_0 00:23:50.960 valid_lft forever preferred_lft forever 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:50.960 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:50.960 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:23:50.960 altname enp24s0f1np1 00:23:50.960 altname ens785f1np1 00:23:50.960 inet 192.168.100.9/24 scope global mlx_0_1 00:23:50.960 valid_lft forever preferred_lft forever 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
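For reference, the get_ip_address helper traced above derives each RDMA port's IPv4 address by filtering the output of ip. A minimal standalone sketch of the same pipeline, using the interface name and addresses seen in this run:

    iface=mlx_0_0                                     # first Mellanox port found above
    ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    # prints 192.168.100.8 on this host; mlx_0_1 yields 192.168.100.9 the same way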
00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:23:50.960 192.168.100.9' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:23:50.960 192.168.100.9' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:23:50.960 192.168.100.9' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.960 18:29:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:50.960 ************************************ 00:23:50.960 START TEST nvmf_target_disconnect_tc1 00:23:50.960 ************************************ 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:23:50.960 18:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:51.219 [2024-10-08 18:29:04.154414] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:51.219 [2024-10-08 18:29:04.154520] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:51.219 [2024-10-08 18:29:04.154551] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:23:52.156 [2024-10-08 18:29:05.158596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:52.156 [2024-10-08 18:29:05.158665] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:52.156 [2024-10-08 18:29:05.158712] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:23:52.156 [2024-10-08 18:29:05.158739] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:52.156 [2024-10-08 18:29:05.158749] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:23:52.157 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:23:52.157 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:52.157 Initializing NVMe Controllers 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.157 00:23:52.157 real 0m1.156s 00:23:52.157 user 0m0.907s 00:23:52.157 sys 0m0.237s 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:52.157 ************************************ 00:23:52.157 END TEST nvmf_target_disconnect_tc1 00:23:52.157 ************************************ 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:52.157 ************************************ 00:23:52.157 START TEST nvmf_target_disconnect_tc2 00:23:52.157 ************************************ 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3512504 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3512504 00:23:52.157 18:29:05 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3512504 ']' 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.157 18:29:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:52.157 [2024-10-08 18:29:05.316896] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:23:52.157 [2024-10-08 18:29:05.316952] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.416 [2024-10-08 18:29:05.402349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.416 [2024-10-08 18:29:05.489191] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.416 [2024-10-08 18:29:05.489233] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.416 [2024-10-08 18:29:05.489243] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.416 [2024-10-08 18:29:05.489267] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.417 [2024-10-08 18:29:05.489275] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.417 [2024-10-08 18:29:05.490752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:23:52.417 [2024-10-08 18:29:05.490853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:23:52.417 [2024-10-08 18:29:05.490953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.417 [2024-10-08 18:29:05.490954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 Malloc0 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 [2024-10-08 18:29:06.277864] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd9b620/0xda7100) succeed. 00:23:53.355 [2024-10-08 18:29:06.288929] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd9cc60/0xde87a0) succeed. 
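The rpc_cmd calls traced here and in the lines just below stand up the NVMe-oF RDMA target: a 64 MB malloc bdev, an RDMA transport, a subsystem, a namespace and a listener. A hedged sketch of the same sequence issued by hand through SPDK's scripts/rpc.py (the harness's rpc_cmd effectively drives the same JSON-RPCs; the path is this workspace's, adjust as needed):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420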
00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 [2024-10-08 18:29:06.441869] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3512701 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:23:53.355 18:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:55.894 18:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3512504 00:23:55.894 18:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Read completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 Write completed with error (sct=0, sc=8) 00:23:56.831 starting I/O failed 00:23:56.831 [2024-10-08 18:29:09.662230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.399 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3512504 Killed "${NVMF_APP[@]}" "$@" 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:57.399 18:29:10 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3513158 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3513158 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3513158 ']' 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.399 18:29:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:57.399 [2024-10-08 18:29:10.527034] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
00:23:57.399 [2024-10-08 18:29:10.527094] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.659 [2024-10-08 18:29:10.616095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Write completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 Read completed with error (sct=0, sc=8) 00:23:57.659 starting I/O failed 00:23:57.659 [2024-10-08 18:29:10.667247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.659 [2024-10-08 18:29:10.702238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:57.659 [2024-10-08 18:29:10.702276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.659 [2024-10-08 18:29:10.702286] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.659 [2024-10-08 18:29:10.702311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.659 [2024-10-08 18:29:10.702319] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.659 [2024-10-08 18:29:10.703744] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:23:57.659 [2024-10-08 18:29:10.703844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:23:57.659 [2024-10-08 18:29:10.703954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.659 [2024-10-08 18:29:10.703956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:23:58.226 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.226 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:58.226 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:58.226 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.226 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.486 Malloc0 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.486 [2024-10-08 18:29:11.478348] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6b1620/0x6bd100) succeed. 00:23:58.486 [2024-10-08 18:29:11.489395] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6b2c60/0x6fe7a0) succeed. 
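Once the listener is re-added on 192.168.100.8:4420 just below, a host could also attach to this subsystem with nvme-cli. A sketch using the parameters visible in this run (the harness itself only sets NVME_CONNECT='nvme connect -i 15', limiting the I/O queue count, as seen earlier; the connect/disconnect pair here is illustrative and not part of this test):

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # tear the association down again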
00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.486 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.487 [2024-10-08 18:29:11.641497] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.487 18:29:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3512701 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 
starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Write completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 Read completed with error (sct=0, sc=8) 00:23:58.746 starting I/O failed 00:23:58.746 [2024-10-08 18:29:11.672333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 [2024-10-08 18:29:11.685524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.685581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.685605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.685616] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.685626] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.695537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 
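Each block of this shape that follows reads as one failed reconnect attempt: the reconnect example tries to re-create an I/O qpair against the restarted target, the target rejects the Fabrics CONNECT because it no longer knows controller ID 0x1 (that controller died with the first nvmf_tgt, pid 3512504), the host reports the CQ transport error and gives up on the qpair, and the next attempt begins. To count the attempts in a saved copy of this console output (build.log is a hypothetical filename):

    grep -c 'qpair failed and we were unable to recover it' build.log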
00:23:58.746 [2024-10-08 18:29:11.705437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.705488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.705509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.705519] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.705528] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.715753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 00:23:58.746 [2024-10-08 18:29:11.725487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.725528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.725548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.725558] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.725566] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.735681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 00:23:58.746 [2024-10-08 18:29:11.745544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.745604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.745624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.745633] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.745642] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.755729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 
00:23:58.746 [2024-10-08 18:29:11.765737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.765785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.765803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.765813] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.765822] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.775820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 00:23:58.746 [2024-10-08 18:29:11.785628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.785676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.785694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.785704] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.785713] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.796013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 00:23:58.746 [2024-10-08 18:29:11.805705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.746 [2024-10-08 18:29:11.805750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.746 [2024-10-08 18:29:11.805768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.746 [2024-10-08 18:29:11.805778] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.746 [2024-10-08 18:29:11.805787] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.746 [2024-10-08 18:29:11.816099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.746 qpair failed and we were unable to recover it. 
00:23:58.746 [2024-10-08 18:29:11.825786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.747 [2024-10-08 18:29:11.825831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.747 [2024-10-08 18:29:11.825854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.747 [2024-10-08 18:29:11.825863] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.747 [2024-10-08 18:29:11.825872] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.747 [2024-10-08 18:29:11.836045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.747 qpair failed and we were unable to recover it. 00:23:58.747 [2024-10-08 18:29:11.845839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.747 [2024-10-08 18:29:11.845884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.747 [2024-10-08 18:29:11.845903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.747 [2024-10-08 18:29:11.845913] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.747 [2024-10-08 18:29:11.845921] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.747 [2024-10-08 18:29:11.856232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.747 qpair failed and we were unable to recover it. 00:23:58.747 [2024-10-08 18:29:11.865922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.747 [2024-10-08 18:29:11.865966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.747 [2024-10-08 18:29:11.865985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.747 [2024-10-08 18:29:11.865995] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.747 [2024-10-08 18:29:11.866018] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.747 [2024-10-08 18:29:11.876113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.747 qpair failed and we were unable to recover it. 
00:23:58.747 [2024-10-08 18:29:11.885949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.747 [2024-10-08 18:29:11.885988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.747 [2024-10-08 18:29:11.886012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.747 [2024-10-08 18:29:11.886022] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.747 [2024-10-08 18:29:11.886030] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.747 [2024-10-08 18:29:11.896085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.747 qpair failed and we were unable to recover it. 00:23:58.747 [2024-10-08 18:29:11.905988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.747 [2024-10-08 18:29:11.906043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.747 [2024-10-08 18:29:11.906061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.747 [2024-10-08 18:29:11.906071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.747 [2024-10-08 18:29:11.906083] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:58.747 [2024-10-08 18:29:11.916194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.747 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:11.926066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:11.926112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:11.926131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:11.926140] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:11.926148] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:11.936551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 
00:23:59.007 [2024-10-08 18:29:11.946087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:11.946132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:11.946150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:11.946160] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:11.946169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:11.956361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:11.966254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:11.966291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:11.966309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:11.966318] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:11.966327] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:11.976426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:11.986354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:11.986396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:11.986414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:11.986424] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:11.986433] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:11.996560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 
00:23:59.007 [2024-10-08 18:29:12.006378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.006427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.006446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.006456] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.006465] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.016516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:12.026383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.026424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.026442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.026452] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.026460] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.036634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:12.046478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.046519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.046537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.046547] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.046555] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.056758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 
00:23:59.007 [2024-10-08 18:29:12.066528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.066570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.066589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.066598] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.066607] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.076788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:12.086518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.086562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.086580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.086593] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.086601] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.096708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:12.106570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.106613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.106632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.106642] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.106650] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.116813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 
00:23:59.007 [2024-10-08 18:29:12.126695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.126732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.126750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.126760] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.126768] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.007 [2024-10-08 18:29:12.136933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.007 qpair failed and we were unable to recover it. 00:23:59.007 [2024-10-08 18:29:12.146737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.007 [2024-10-08 18:29:12.146778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.007 [2024-10-08 18:29:12.146796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.007 [2024-10-08 18:29:12.146806] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.007 [2024-10-08 18:29:12.146814] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.008 [2024-10-08 18:29:12.157003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.008 qpair failed and we were unable to recover it. 00:23:59.008 [2024-10-08 18:29:12.166834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.008 [2024-10-08 18:29:12.166880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.008 [2024-10-08 18:29:12.166900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.008 [2024-10-08 18:29:12.166910] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.008 [2024-10-08 18:29:12.166918] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.008 [2024-10-08 18:29:12.177028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.008 qpair failed and we were unable to recover it. 
00:23:59.267 [2024-10-08 18:29:12.186836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.267 [2024-10-08 18:29:12.186882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.267 [2024-10-08 18:29:12.186902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.267 [2024-10-08 18:29:12.186912] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.267 [2024-10-08 18:29:12.186920] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.267 [2024-10-08 18:29:12.197122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.267 qpair failed and we were unable to recover it. 00:23:59.267 [2024-10-08 18:29:12.206926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.267 [2024-10-08 18:29:12.206962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.267 [2024-10-08 18:29:12.206980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.267 [2024-10-08 18:29:12.206989] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.267 [2024-10-08 18:29:12.207004] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.267 [2024-10-08 18:29:12.217075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.267 qpair failed and we were unable to recover it. 00:23:59.267 [2024-10-08 18:29:12.226890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.267 [2024-10-08 18:29:12.226936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.267 [2024-10-08 18:29:12.226954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.267 [2024-10-08 18:29:12.226964] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.267 [2024-10-08 18:29:12.226972] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.267 [2024-10-08 18:29:12.237101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.267 qpair failed and we were unable to recover it. 
00:23:59.267 [2024-10-08 18:29:12.247080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.267 [2024-10-08 18:29:12.247124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.267 [2024-10-08 18:29:12.247143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.267 [2024-10-08 18:29:12.247153] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.267 [2024-10-08 18:29:12.247162] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.267 [2024-10-08 18:29:12.257309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.267 qpair failed and we were unable to recover it. 00:23:59.267 [2024-10-08 18:29:12.267054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.267 [2024-10-08 18:29:12.267094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.267115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.267125] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.267133] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.277422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.268 [2024-10-08 18:29:12.287211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.287246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.287265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.287274] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.287283] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.297402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 
00:23:59.268 [2024-10-08 18:29:12.307253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.307295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.307314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.307323] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.307332] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.317502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.268 [2024-10-08 18:29:12.327247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.327287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.327306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.327315] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.327324] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.337469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.268 [2024-10-08 18:29:12.347274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.347314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.347332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.347342] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.347354] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.357536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 
00:23:59.268 [2024-10-08 18:29:12.367271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.367311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.367329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.367339] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.367348] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.377761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.268 [2024-10-08 18:29:12.387360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.387402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.387420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.387430] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.387439] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.397778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.268 [2024-10-08 18:29:12.407434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.407479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.407497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.407507] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.407516] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.417805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 
00:23:59.268 [2024-10-08 18:29:12.427465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.268 [2024-10-08 18:29:12.427506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.268 [2024-10-08 18:29:12.427525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.268 [2024-10-08 18:29:12.427534] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.268 [2024-10-08 18:29:12.427542] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.268 [2024-10-08 18:29:12.437798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.268 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.447515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.447561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.447580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.447589] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.447598] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.457845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.467619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.467661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.467679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.467689] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.467697] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.477797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 
00:23:59.528 [2024-10-08 18:29:12.487750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.487792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.487810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.487820] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.487829] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.497972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.507643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.507684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.507702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.507712] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.507721] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.517987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.527719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.527756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.527774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.527787] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.527796] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.538041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 
00:23:59.528 [2024-10-08 18:29:12.547814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.547856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.547875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.547884] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.547893] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.558150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.567914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.567953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.567972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.567982] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.567990] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.578638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 00:23:59.528 [2024-10-08 18:29:12.588015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.528 [2024-10-08 18:29:12.588052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.528 [2024-10-08 18:29:12.588070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.528 [2024-10-08 18:29:12.588080] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.528 [2024-10-08 18:29:12.588088] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.528 [2024-10-08 18:29:12.598251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.528 qpair failed and we were unable to recover it. 
00:23:59.529 [2024-10-08 18:29:12.608022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.529 [2024-10-08 18:29:12.608060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.529 [2024-10-08 18:29:12.608079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.529 [2024-10-08 18:29:12.608088] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.529 [2024-10-08 18:29:12.608097] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.529 [2024-10-08 18:29:12.618484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.529 qpair failed and we were unable to recover it. 00:23:59.529 [2024-10-08 18:29:12.628134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.529 [2024-10-08 18:29:12.628178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.529 [2024-10-08 18:29:12.628196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.529 [2024-10-08 18:29:12.628206] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.529 [2024-10-08 18:29:12.628215] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.529 [2024-10-08 18:29:12.638527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.529 qpair failed and we were unable to recover it. 00:23:59.529 [2024-10-08 18:29:12.648141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.529 [2024-10-08 18:29:12.648187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.529 [2024-10-08 18:29:12.648205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.529 [2024-10-08 18:29:12.648215] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.529 [2024-10-08 18:29:12.648223] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.529 [2024-10-08 18:29:12.658507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.529 qpair failed and we were unable to recover it. 
00:23:59.529 [2024-10-08 18:29:12.668188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.529 [2024-10-08 18:29:12.668229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.529 [2024-10-08 18:29:12.668247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.529 [2024-10-08 18:29:12.668256] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.529 [2024-10-08 18:29:12.668265] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.529 [2024-10-08 18:29:12.678442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.529 qpair failed and we were unable to recover it. 00:23:59.529 [2024-10-08 18:29:12.688219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.529 [2024-10-08 18:29:12.688259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.529 [2024-10-08 18:29:12.688278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.529 [2024-10-08 18:29:12.688287] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.529 [2024-10-08 18:29:12.688296] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.529 [2024-10-08 18:29:12.698528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.529 qpair failed and we were unable to recover it. 00:23:59.788 [2024-10-08 18:29:12.708347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.788 [2024-10-08 18:29:12.708388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.788 [2024-10-08 18:29:12.708409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.788 [2024-10-08 18:29:12.708419] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.788 [2024-10-08 18:29:12.708428] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.718476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 
00:23:59.789 [2024-10-08 18:29:12.728427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.728472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.728490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.728499] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.728508] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.738630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.748465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.748501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.748520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.748530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.748539] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.758752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.768515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.768555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.768573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.768583] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.768592] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.778879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 
00:23:59.789 [2024-10-08 18:29:12.788551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.788595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.788613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.788623] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.788632] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.798742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.808699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.808744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.808763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.808772] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.808781] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.818902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.828710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.828754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.828772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.828782] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.828790] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.838861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 
00:23:59.789 [2024-10-08 18:29:12.848712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.848747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.848766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.848775] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.848784] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.859042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.868845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.868887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.868906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.868916] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.868924] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.879080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.888878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.888924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.888942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.888952] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.888961] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.899127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 
00:23:59.789 [2024-10-08 18:29:12.908981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.909033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.909051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.909061] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.909069] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.919135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.928995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.929040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.929058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.929068] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.929077] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.939268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 00:23:59.789 [2024-10-08 18:29:12.949051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.789 [2024-10-08 18:29:12.949094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.789 [2024-10-08 18:29:12.949112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.789 [2024-10-08 18:29:12.949122] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.789 [2024-10-08 18:29:12.949130] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:23:59.789 [2024-10-08 18:29:12.959279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.789 qpair failed and we were unable to recover it. 
00:24:00.049 [2024-10-08 18:29:12.969217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.049 [2024-10-08 18:29:12.969265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:12.969283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:12.969296] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:12.969305] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:12.979471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:12.989193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:12.989233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:12.989252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:12.989262] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:12.989270] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:12.999596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.009322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.009365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.009384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.009393] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.009402] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.019428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 
00:24:00.050 [2024-10-08 18:29:13.029247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.029289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.029308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.029317] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.029326] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.039412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.049363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.049407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.049425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.049434] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.049443] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.059627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.069356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.069394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.069412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.069421] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.069430] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.079712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 
00:24:00.050 [2024-10-08 18:29:13.089543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.089580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.089599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.089608] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.089617] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.099798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.109465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.109508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.109526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.109536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.109545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.119832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.129533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.129581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.129601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.129611] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.129619] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.139844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 
00:24:00.050 [2024-10-08 18:29:13.149657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.149702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.149724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.149735] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.149745] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.159957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.169765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.169803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.169822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.169831] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.169840] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.179990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.050 [2024-10-08 18:29:13.189686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.189730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.189748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.189758] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.189766] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.199876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 
00:24:00.050 [2024-10-08 18:29:13.209782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.050 [2024-10-08 18:29:13.209826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.050 [2024-10-08 18:29:13.209844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.050 [2024-10-08 18:29:13.209854] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.050 [2024-10-08 18:29:13.209863] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.050 [2024-10-08 18:29:13.220544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.050 qpair failed and we were unable to recover it. 00:24:00.310 [2024-10-08 18:29:13.229832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.310 [2024-10-08 18:29:13.229879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.310 [2024-10-08 18:29:13.229897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.310 [2024-10-08 18:29:13.229908] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.310 [2024-10-08 18:29:13.229918] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.310 [2024-10-08 18:29:13.240039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.310 qpair failed and we were unable to recover it. 00:24:00.310 [2024-10-08 18:29:13.249862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.310 [2024-10-08 18:29:13.249906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.249925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.249936] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.249944] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.260432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 
00:24:00.311 [2024-10-08 18:29:13.269886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.269929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.269947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.269957] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.269967] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.280366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.290044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.290092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.290111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.290120] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.290129] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.300339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.310102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.310144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.310163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.310172] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.310181] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.320266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 
00:24:00.311 [2024-10-08 18:29:13.330132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.330180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.330198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.330208] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.330216] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.340373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.350217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.350261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.350280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.350290] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.350298] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.360488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.370310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.370358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.370377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.370386] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.370395] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.380704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 
00:24:00.311 [2024-10-08 18:29:13.390256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.390295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.390313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.390323] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.390331] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.400419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.410390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.410432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.410451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.410460] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.410472] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.420704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.430437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.430479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.430499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.430508] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.430517] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.440695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 
00:24:00.311 [2024-10-08 18:29:13.450428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.450475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.450493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.450502] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.450511] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.460750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.311 [2024-10-08 18:29:13.470731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.311 [2024-10-08 18:29:13.470776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.311 [2024-10-08 18:29:13.470796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.311 [2024-10-08 18:29:13.470805] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.311 [2024-10-08 18:29:13.470814] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.311 [2024-10-08 18:29:13.480833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.311 qpair failed and we were unable to recover it. 00:24:00.571 [2024-10-08 18:29:13.490606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.571 [2024-10-08 18:29:13.490645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.571 [2024-10-08 18:29:13.490663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.571 [2024-10-08 18:29:13.490673] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.571 [2024-10-08 18:29:13.490681] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.571 [2024-10-08 18:29:13.501013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.571 qpair failed and we were unable to recover it. 
00:24:00.571 [2024-10-08 18:29:13.510714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.571 [2024-10-08 18:29:13.510757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.571 [2024-10-08 18:29:13.510775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.571 [2024-10-08 18:29:13.510785] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.571 [2024-10-08 18:29:13.510794] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.571 [2024-10-08 18:29:13.520977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.571 qpair failed and we were unable to recover it. 00:24:00.571 [2024-10-08 18:29:13.530722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.571 [2024-10-08 18:29:13.530767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.571 [2024-10-08 18:29:13.530785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.530794] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.530803] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.541141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.550768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.550813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.550831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.550841] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.550850] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.561330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 
00:24:00.572 [2024-10-08 18:29:13.570811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.570852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.570870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.570880] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.570888] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.581218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.590914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.590955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.590977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.590987] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.590996] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.601248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.611081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.611128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.611146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.611156] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.611164] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.621294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 
00:24:00.572 [2024-10-08 18:29:13.631123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.631166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.631184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.631194] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.631202] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.641195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.651132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.651169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.651188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.651199] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.651207] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.661503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.671188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.671232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.671250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.671259] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.671268] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.681550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 
00:24:00.572 [2024-10-08 18:29:13.691261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.691305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.691323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.691332] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.691341] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.701517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.711303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.711346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.711365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.711374] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.711383] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.721565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 00:24:00.572 [2024-10-08 18:29:13.731417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.572 [2024-10-08 18:29:13.731453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.572 [2024-10-08 18:29:13.731471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.572 [2024-10-08 18:29:13.731481] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.572 [2024-10-08 18:29:13.731489] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.572 [2024-10-08 18:29:13.741537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 qpair failed and we were unable to recover it. 
00:24:00.832 [2024-10-08 18:29:13.751476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.751518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.751537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.751546] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.751555] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.761783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 00:24:00.832 [2024-10-08 18:29:13.771583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.771627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.771649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.771659] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.771667] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.781774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 00:24:00.832 [2024-10-08 18:29:13.791601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.791646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.791666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.791677] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.791687] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.801876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 
00:24:00.832 [2024-10-08 18:29:13.811672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.811716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.811734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.811744] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.811753] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.821836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 00:24:00.832 [2024-10-08 18:29:13.831784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.831823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.831841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.831851] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.831860] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.841931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 00:24:00.832 [2024-10-08 18:29:13.851684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.851732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.851750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.851759] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.851772] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.862598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 
00:24:00.832 [2024-10-08 18:29:13.871817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.871859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.871877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.871887] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.871895] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.832 [2024-10-08 18:29:13.881894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.832 qpair failed and we were unable to recover it. 00:24:00.832 [2024-10-08 18:29:13.891870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.832 [2024-10-08 18:29:13.891907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.832 [2024-10-08 18:29:13.891927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.832 [2024-10-08 18:29:13.891936] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.832 [2024-10-08 18:29:13.891945] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:13.902094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 00:24:00.833 [2024-10-08 18:29:13.911835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.833 [2024-10-08 18:29:13.911877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.833 [2024-10-08 18:29:13.911895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.833 [2024-10-08 18:29:13.911905] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.833 [2024-10-08 18:29:13.911913] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:13.922038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 
00:24:00.833 [2024-10-08 18:29:13.931912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.833 [2024-10-08 18:29:13.931957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.833 [2024-10-08 18:29:13.931975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.833 [2024-10-08 18:29:13.931984] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.833 [2024-10-08 18:29:13.931993] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:13.942236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 00:24:00.833 [2024-10-08 18:29:13.952049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.833 [2024-10-08 18:29:13.952087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.833 [2024-10-08 18:29:13.952105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.833 [2024-10-08 18:29:13.952115] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.833 [2024-10-08 18:29:13.952124] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:13.962421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 00:24:00.833 [2024-10-08 18:29:13.972197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.833 [2024-10-08 18:29:13.972235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.833 [2024-10-08 18:29:13.972253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.833 [2024-10-08 18:29:13.972263] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.833 [2024-10-08 18:29:13.972271] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:13.982512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 
00:24:00.833 [2024-10-08 18:29:13.992195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.833 [2024-10-08 18:29:13.992238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.833 [2024-10-08 18:29:13.992256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.833 [2024-10-08 18:29:13.992266] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.833 [2024-10-08 18:29:13.992275] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:00.833 [2024-10-08 18:29:14.002320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.833 qpair failed and we were unable to recover it. 00:24:01.092 [2024-10-08 18:29:14.012154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.012196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.012214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.012223] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.012232] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.022522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 00:24:01.092 [2024-10-08 18:29:14.032251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.032293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.032312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.032325] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.032334] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.042525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 
00:24:01.092 [2024-10-08 18:29:14.052217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.052254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.052273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.052282] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.052291] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.062569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 00:24:01.092 [2024-10-08 18:29:14.072346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.072388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.072406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.072415] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.072424] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.082648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 00:24:01.092 [2024-10-08 18:29:14.092409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.092449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.092467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.092477] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.092485] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.102649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 
00:24:01.092 [2024-10-08 18:29:14.112445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.112491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.112509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.112519] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.112528] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.122829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.092 qpair failed and we were unable to recover it. 00:24:01.092 [2024-10-08 18:29:14.132459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.092 [2024-10-08 18:29:14.132501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.092 [2024-10-08 18:29:14.132520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.092 [2024-10-08 18:29:14.132530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.092 [2024-10-08 18:29:14.132538] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.092 [2024-10-08 18:29:14.142802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 00:24:01.093 [2024-10-08 18:29:14.152611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.152654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.152673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.152682] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.152691] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.162902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 
00:24:01.093 [2024-10-08 18:29:14.172653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.172691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.172709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.172719] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.172728] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.182923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 00:24:01.093 [2024-10-08 18:29:14.192678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.192720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.192738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.192748] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.192757] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.202851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 00:24:01.093 [2024-10-08 18:29:14.212741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.212786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.212807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.212817] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.212826] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.223136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 
00:24:01.093 [2024-10-08 18:29:14.232824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.232867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.232885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.232895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.232903] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.242962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 00:24:01.093 [2024-10-08 18:29:14.252775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.093 [2024-10-08 18:29:14.252817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.093 [2024-10-08 18:29:14.252836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.093 [2024-10-08 18:29:14.252845] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.093 [2024-10-08 18:29:14.252854] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.093 [2024-10-08 18:29:14.263206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.093 qpair failed and we were unable to recover it. 00:24:01.352 [2024-10-08 18:29:14.272902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.352 [2024-10-08 18:29:14.272942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.352 [2024-10-08 18:29:14.272960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.352 [2024-10-08 18:29:14.272970] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.352 [2024-10-08 18:29:14.272979] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.283134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 
00:24:01.353 [2024-10-08 18:29:14.292932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.292973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.292991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.293006] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.293021] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.303324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.313034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.313076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.313095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.313104] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.313113] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.323279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.333100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.333139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.333158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.333167] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.333176] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.343409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 
00:24:01.353 [2024-10-08 18:29:14.353195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.353234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.353252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.353261] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.353270] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.363519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.373190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.373230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.373249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.373259] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.373267] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.383557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.393229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.393273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.393292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.393301] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.393310] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.403497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 
00:24:01.353 [2024-10-08 18:29:14.413287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.413332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.413350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.413360] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.413368] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.423757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.433412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.433448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.433467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.433477] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.433485] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.443735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.453476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.453512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.453530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.453540] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.453549] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.463820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 
00:24:01.353 [2024-10-08 18:29:14.473424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.473466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.473484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.473497] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.473505] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.483877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.493481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.493522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.493541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.493551] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.493559] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.504175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 00:24:01.353 [2024-10-08 18:29:14.513630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.353 [2024-10-08 18:29:14.513667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.353 [2024-10-08 18:29:14.513686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.353 [2024-10-08 18:29:14.513695] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.353 [2024-10-08 18:29:14.513704] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.353 [2024-10-08 18:29:14.523997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.353 qpair failed and we were unable to recover it. 
00:24:01.673 [2024-10-08 18:29:14.533669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.533714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.533734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.533744] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.533753] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.544025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.553735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.553782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.553801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.553811] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.553820] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.563959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.573762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.573811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.573830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.573839] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.573848] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.584163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 
00:24:01.673 [2024-10-08 18:29:14.593853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.593892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.593910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.593920] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.593929] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.604092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.613967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.614020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.614039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.614049] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.614058] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.624229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.633963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.634014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.634031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.634041] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.634050] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.644234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 
00:24:01.673 [2024-10-08 18:29:14.653985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.654037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.654059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.654068] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.654077] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.664395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.674098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.674136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.674155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.674164] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.674173] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.684186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.694166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.694206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.694224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.694234] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.694243] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.704499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 
00:24:01.673 [2024-10-08 18:29:14.714224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.714265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.714283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.714293] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.714301] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.724496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.734190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.734229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.734247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.734257] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.673 [2024-10-08 18:29:14.734265] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.673 [2024-10-08 18:29:14.744624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.673 qpair failed and we were unable to recover it. 00:24:01.673 [2024-10-08 18:29:14.754307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.673 [2024-10-08 18:29:14.754348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.673 [2024-10-08 18:29:14.754367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.673 [2024-10-08 18:29:14.754377] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.674 [2024-10-08 18:29:14.754386] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.674 [2024-10-08 18:29:14.764749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.674 qpair failed and we were unable to recover it. 
00:24:01.674 [2024-10-08 18:29:14.774306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.674 [2024-10-08 18:29:14.774344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.674 [2024-10-08 18:29:14.774362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.674 [2024-10-08 18:29:14.774372] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.674 [2024-10-08 18:29:14.774380] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.674 [2024-10-08 18:29:14.784826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.674 qpair failed and we were unable to recover it. 00:24:01.674 [2024-10-08 18:29:14.794474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.674 [2024-10-08 18:29:14.794515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.674 [2024-10-08 18:29:14.794533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.674 [2024-10-08 18:29:14.794542] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.674 [2024-10-08 18:29:14.794551] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.674 [2024-10-08 18:29:14.804680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.674 qpair failed and we were unable to recover it. 00:24:01.933 [2024-10-08 18:29:14.814545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.933 [2024-10-08 18:29:14.814586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.933 [2024-10-08 18:29:14.814604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.933 [2024-10-08 18:29:14.814614] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.933 [2024-10-08 18:29:14.814624] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.933 [2024-10-08 18:29:14.824719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.933 qpair failed and we were unable to recover it. 
00:24:01.933 [2024-10-08 18:29:14.834476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.933 [2024-10-08 18:29:14.834526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.933 [2024-10-08 18:29:14.834544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.834554] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.834562] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.844771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.854607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.854652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.854670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.854680] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.854688] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.864928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.874666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.874709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.874726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.874736] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.874746] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.884838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 
00:24:01.934 [2024-10-08 18:29:14.894684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.894732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.894750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.894760] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.894769] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.905172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.914807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.914851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.914870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.914883] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.914892] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.924946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.934765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.934810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.934828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.934838] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.934846] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.945074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 
00:24:01.934 [2024-10-08 18:29:14.954852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.954894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.954912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.954921] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.954930] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.965192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.974980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.975031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.975049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.975059] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.975068] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:14.985276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:14.994957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:14.994997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:14.995020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:14.995029] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:14.995038] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.005245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 
00:24:01.934 [2024-10-08 18:29:15.014936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:15.014978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:15.014997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:15.015011] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:15.015020] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.025206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:15.035184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:15.035224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:15.035243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:15.035253] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:15.035262] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.045259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:15.055169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:15.055210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:15.055229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:15.055238] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:15.055247] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.065638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 
00:24:01.934 [2024-10-08 18:29:15.075246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:15.075289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:15.075307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:15.075317] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:15.075326] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.085617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:01.934 [2024-10-08 18:29:15.095420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.934 [2024-10-08 18:29:15.095458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.934 [2024-10-08 18:29:15.095480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.934 [2024-10-08 18:29:15.095489] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.934 [2024-10-08 18:29:15.095498] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:01.934 [2024-10-08 18:29:15.105764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.934 qpair failed and we were unable to recover it. 00:24:02.193 [2024-10-08 18:29:15.115348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.115388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.115406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.115415] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.115424] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.125581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 
00:24:02.194 [2024-10-08 18:29:15.135500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.135547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.135566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.135575] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.135585] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.146045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.155438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.155478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.155497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.155506] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.155515] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.165728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.175356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.175398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.175416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.175425] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.175434] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.185837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 
00:24:02.194 [2024-10-08 18:29:15.195575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.195616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.195634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.195643] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.195652] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.205866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.215647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.215692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.215710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.215720] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.215728] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.225978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.235729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.235765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.235784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.235793] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.235801] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.246012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 
00:24:02.194 [2024-10-08 18:29:15.255727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.255771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.255789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.255798] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.255807] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.266118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.275839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.275888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.275906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.275916] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.275924] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.286144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.295930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.295976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.295995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.296009] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.296017] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.306216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 
00:24:02.194 [2024-10-08 18:29:15.315991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.316039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.316059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.316069] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.316078] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.326224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.336049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.336094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.336113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.336123] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.336132] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.194 [2024-10-08 18:29:15.346214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.194 qpair failed and we were unable to recover it. 00:24:02.194 [2024-10-08 18:29:15.356089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.194 [2024-10-08 18:29:15.356132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.194 [2024-10-08 18:29:15.356150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.194 [2024-10-08 18:29:15.356160] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.194 [2024-10-08 18:29:15.356171] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.454 [2024-10-08 18:29:15.366453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.454 qpair failed and we were unable to recover it. 
00:24:02.454 [2024-10-08 18:29:15.376196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.454 [2024-10-08 18:29:15.376239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.454 [2024-10-08 18:29:15.376258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.454 [2024-10-08 18:29:15.376267] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.454 [2024-10-08 18:29:15.376276] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.454 [2024-10-08 18:29:15.386175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.454 qpair failed and we were unable to recover it. 00:24:02.454 [2024-10-08 18:29:15.396236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.454 [2024-10-08 18:29:15.396279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.454 [2024-10-08 18:29:15.396297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.454 [2024-10-08 18:29:15.396307] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.454 [2024-10-08 18:29:15.396317] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.454 [2024-10-08 18:29:15.406437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.454 qpair failed and we were unable to recover it. 00:24:02.454 [2024-10-08 18:29:15.416331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.454 [2024-10-08 18:29:15.416371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.454 [2024-10-08 18:29:15.416389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.454 [2024-10-08 18:29:15.416398] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.454 [2024-10-08 18:29:15.416407] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.454 [2024-10-08 18:29:15.426536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.454 qpair failed and we were unable to recover it. 
00:24:02.454 [2024-10-08 18:29:15.436400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.454 [2024-10-08 18:29:15.436443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.454 [2024-10-08 18:29:15.436461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.454 [2024-10-08 18:29:15.436471] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.454 [2024-10-08 18:29:15.436480] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.454 [2024-10-08 18:29:15.446514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.454 qpair failed and we were unable to recover it. 00:24:02.454 [2024-10-08 18:29:15.456587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.456628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.456647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.456656] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.456666] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.466768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 00:24:02.455 [2024-10-08 18:29:15.476417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.476460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.476478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.476488] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.476497] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.486757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 
00:24:02.455 [2024-10-08 18:29:15.496584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.496624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.496642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.496652] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.496661] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.506879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 00:24:02.455 [2024-10-08 18:29:15.516572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.516615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.516634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.516643] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.516652] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.526893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 00:24:02.455 [2024-10-08 18:29:15.536653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.536695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.536716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.536726] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.536735] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.546815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 
00:24:02.455 [2024-10-08 18:29:15.556697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.556740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.556759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.556768] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.556777] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.566937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 00:24:02.455 [2024-10-08 18:29:15.576769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.576808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.576826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.576835] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.576844] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.587073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 00:24:02.455 [2024-10-08 18:29:15.596845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.596886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.596904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.596914] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.596922] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.455 [2024-10-08 18:29:15.606876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.455 qpair failed and we were unable to recover it. 
00:24:02.455 [2024-10-08 18:29:15.616963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.455 [2024-10-08 18:29:15.617011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.455 [2024-10-08 18:29:15.617029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.455 [2024-10-08 18:29:15.617039] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.455 [2024-10-08 18:29:15.617048] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.714 [2024-10-08 18:29:15.627338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.714 qpair failed and we were unable to recover it. 00:24:02.714 [2024-10-08 18:29:15.636983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.714 [2024-10-08 18:29:15.637026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.714 [2024-10-08 18:29:15.637044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.714 [2024-10-08 18:29:15.637054] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.714 [2024-10-08 18:29:15.637063] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.714 [2024-10-08 18:29:15.647187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.714 qpair failed and we were unable to recover it. 00:24:02.714 [2024-10-08 18:29:15.657028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.714 [2024-10-08 18:29:15.657067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.714 [2024-10-08 18:29:15.657086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.714 [2024-10-08 18:29:15.657096] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.714 [2024-10-08 18:29:15.657105] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.714 [2024-10-08 18:29:15.667390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.714 qpair failed and we were unable to recover it. 
00:24:02.714 [2024-10-08 18:29:15.677053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.714 [2024-10-08 18:29:15.677095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.714 [2024-10-08 18:29:15.677120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.714 [2024-10-08 18:29:15.677130] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.714 [2024-10-08 18:29:15.677138] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.714 [2024-10-08 18:29:15.687091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.697213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.697257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.697275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.697285] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.697294] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.707418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.717382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.717424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.717446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.717456] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.717464] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.727473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 
00:24:02.715 [2024-10-08 18:29:15.737211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.737250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.737268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.737278] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.737287] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.747424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.757289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.757332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.757351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.757360] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.757369] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.767648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.777257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.777300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.777319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.777328] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.777337] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.787993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 
00:24:02.715 [2024-10-08 18:29:15.797331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.797374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.797392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.797401] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.797413] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.807661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.817455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.817493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.817511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.817521] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.817530] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.827632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.837595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.837637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.837655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.837664] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.837673] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.847748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 
00:24:02.715 [2024-10-08 18:29:15.857705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.857752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.857771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.857781] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.857789] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.715 [2024-10-08 18:29:15.867986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.715 qpair failed and we were unable to recover it. 00:24:02.715 [2024-10-08 18:29:15.877660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.715 [2024-10-08 18:29:15.877697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.715 [2024-10-08 18:29:15.877715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.715 [2024-10-08 18:29:15.877724] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.715 [2024-10-08 18:29:15.877733] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.888012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:15.897651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.897691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.897710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.897719] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.897728] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.908054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 
00:24:02.974 [2024-10-08 18:29:15.917718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.917758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.917776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.917786] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.917795] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.927960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:15.937784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.937833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.937851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.937861] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.937870] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.948226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:15.957844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.957881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.957899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.957909] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.957918] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.968115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 
00:24:02.974 [2024-10-08 18:29:15.977936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.977979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.977997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.978016] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.978025] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:15.988173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:15.997908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:15.997951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:15.997970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:15.997979] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:15.997988] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:16.008098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:16.018071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:16.018119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:16.018138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:16.018147] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:16.018156] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:16.028533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 
00:24:02.974 [2024-10-08 18:29:16.038096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.974 [2024-10-08 18:29:16.038134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.974 [2024-10-08 18:29:16.038152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.974 [2024-10-08 18:29:16.038162] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.974 [2024-10-08 18:29:16.038171] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.974 [2024-10-08 18:29:16.048503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.974 qpair failed and we were unable to recover it. 00:24:02.974 [2024-10-08 18:29:16.058162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.975 [2024-10-08 18:29:16.058205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.975 [2024-10-08 18:29:16.058223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.975 [2024-10-08 18:29:16.058232] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.975 [2024-10-08 18:29:16.058241] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.975 [2024-10-08 18:29:16.068404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.975 qpair failed and we were unable to recover it. 00:24:02.975 [2024-10-08 18:29:16.078215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.975 [2024-10-08 18:29:16.078259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.975 [2024-10-08 18:29:16.078278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.975 [2024-10-08 18:29:16.078287] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.975 [2024-10-08 18:29:16.078296] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.975 [2024-10-08 18:29:16.088573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.975 qpair failed and we were unable to recover it. 
00:24:02.975 [2024-10-08 18:29:16.098227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.975 [2024-10-08 18:29:16.098269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.975 [2024-10-08 18:29:16.098288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.975 [2024-10-08 18:29:16.098297] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.975 [2024-10-08 18:29:16.098306] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.975 [2024-10-08 18:29:16.108737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.975 qpair failed and we were unable to recover it. 00:24:02.975 [2024-10-08 18:29:16.118283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.975 [2024-10-08 18:29:16.118329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.975 [2024-10-08 18:29:16.118347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.975 [2024-10-08 18:29:16.118356] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.975 [2024-10-08 18:29:16.118365] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:02.975 [2024-10-08 18:29:16.128712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.975 qpair failed and we were unable to recover it. 00:24:02.975 [2024-10-08 18:29:16.138398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.975 [2024-10-08 18:29:16.138445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.975 [2024-10-08 18:29:16.138463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.975 [2024-10-08 18:29:16.138473] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.975 [2024-10-08 18:29:16.138482] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.148717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 
00:24:03.234 [2024-10-08 18:29:16.158416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.158458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.158480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.158489] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.158498] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.168653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.178612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.178658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.178676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.178686] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.178695] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.188908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.198552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.198598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.198616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.198625] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.198634] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.208850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 
00:24:03.234 [2024-10-08 18:29:16.218619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.218665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.218683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.218693] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.218701] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.229024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.238700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.238743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.238760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.238770] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.238782] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.249047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.258827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.258877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.258898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.258909] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.258919] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.269191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 
00:24:03.234 [2024-10-08 18:29:16.278854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.278895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.278913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.278923] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.278931] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.289132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.298853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.298891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.298909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.298919] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.298928] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.309128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.234 [2024-10-08 18:29:16.319005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.319047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.319065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.319075] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.319084] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.329237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 
00:24:03.234 [2024-10-08 18:29:16.338958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.234 [2024-10-08 18:29:16.339019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.234 [2024-10-08 18:29:16.339038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.234 [2024-10-08 18:29:16.339047] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.234 [2024-10-08 18:29:16.339056] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.234 [2024-10-08 18:29:16.349286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.234 qpair failed and we were unable to recover it. 00:24:03.235 [2024-10-08 18:29:16.359083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.235 [2024-10-08 18:29:16.359121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.235 [2024-10-08 18:29:16.359139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.235 [2024-10-08 18:29:16.359149] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.235 [2024-10-08 18:29:16.359157] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.235 [2024-10-08 18:29:16.369366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.235 qpair failed and we were unable to recover it. 00:24:03.235 [2024-10-08 18:29:16.379167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.235 [2024-10-08 18:29:16.379207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.235 [2024-10-08 18:29:16.379226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.235 [2024-10-08 18:29:16.379236] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.235 [2024-10-08 18:29:16.379245] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.235 [2024-10-08 18:29:16.389470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.235 qpair failed and we were unable to recover it. 
00:24:03.235 [2024-10-08 18:29:16.399185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.235 [2024-10-08 18:29:16.399227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.235 [2024-10-08 18:29:16.399245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.235 [2024-10-08 18:29:16.399255] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.235 [2024-10-08 18:29:16.399264] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.409557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.419264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.419304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.419322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.419335] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.419344] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.430114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.439243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.439282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.439301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.439312] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.439321] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.449652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 
00:24:03.494 [2024-10-08 18:29:16.459352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.459395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.459413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.459423] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.459432] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.469797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.479430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.479472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.479491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.479501] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.479510] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.489660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.499522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.499565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.499583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.499593] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.499602] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.509933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 
00:24:03.494 [2024-10-08 18:29:16.519633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.519677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.519696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.519705] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.519714] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.529882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.539631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.539669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.539687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.539697] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.539706] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.550026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.559681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.559724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.559742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.559752] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.559761] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.570108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 
00:24:03.494 [2024-10-08 18:29:16.579753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.579798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.579817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.579826] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.579835] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.590047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.599760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.599800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.599821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.599831] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.599840] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.610163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 00:24:03.494 [2024-10-08 18:29:16.619854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.619897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.619915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.619925] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.494 [2024-10-08 18:29:16.619934] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.494 [2024-10-08 18:29:16.630219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.494 qpair failed and we were unable to recover it. 
00:24:03.494 [2024-10-08 18:29:16.639928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.494 [2024-10-08 18:29:16.639969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.494 [2024-10-08 18:29:16.639987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.494 [2024-10-08 18:29:16.639997] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.495 [2024-10-08 18:29:16.640010] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.495 [2024-10-08 18:29:16.650235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.495 qpair failed and we were unable to recover it. 00:24:03.495 [2024-10-08 18:29:16.659991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.495 [2024-10-08 18:29:16.660043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.495 [2024-10-08 18:29:16.660061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.495 [2024-10-08 18:29:16.660071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.495 [2024-10-08 18:29:16.660079] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.753 [2024-10-08 18:29:16.670348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.753 qpair failed and we were unable to recover it. 00:24:03.753 [2024-10-08 18:29:16.680007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.753 [2024-10-08 18:29:16.680043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.753 [2024-10-08 18:29:16.680062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.753 [2024-10-08 18:29:16.680071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.753 [2024-10-08 18:29:16.680080] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.753 [2024-10-08 18:29:16.690318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.753 qpair failed and we were unable to recover it. 
00:24:03.753 [2024-10-08 18:29:16.700124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.753 [2024-10-08 18:29:16.700172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.753 [2024-10-08 18:29:16.700190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.753 [2024-10-08 18:29:16.700200] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.753 [2024-10-08 18:29:16.700209] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.753 [2024-10-08 18:29:16.710551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.753 qpair failed and we were unable to recover it. 00:24:03.753 [2024-10-08 18:29:16.720129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.753 [2024-10-08 18:29:16.720172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.753 [2024-10-08 18:29:16.720191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.753 [2024-10-08 18:29:16.720200] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.753 [2024-10-08 18:29:16.720208] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:03.753 [2024-10-08 18:29:16.730439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.753 qpair failed and we were unable to recover it. 00:24:03.753 [2024-10-08 18:29:16.730573] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:03.753 A controller has encountered a failure and is being reset. 00:24:03.753 [2024-10-08 18:29:16.740658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.753 [2024-10-08 18:29:16.740718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.753 [2024-10-08 18:29:16.740778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.753 [2024-10-08 18:29:16.740812] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.753 [2024-10-08 18:29:16.740841] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:03.753 [2024-10-08 18:29:16.750738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.753 qpair failed and we were unable to recover it. 
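Here the failure escalates: the host cannot even submit a Keep Alive, marks the controller as failed and begins resetting it, after which the CONNECT retries move to a different rqpair (0x2000003d4c00). When a host keeps logging "Unknown controller ID", the target's own view can be checked over its RPC socket while it is still running. A hedged sketch using the stock scripts/rpc.py helpers (the RPC method names below are recalled from SPDK's standard set, not taken from this log):

```bash
# Hedged sketch: ask the running nvmf_tgt what it currently knows about
# nqn.2016-06.io.spdk:cnode1 while the host is failing to reconnect.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_subsystem_get_controllers "$SUBNQN"   # controller IDs the target tracks
$RPC nvmf_subsystem_get_qpairs "$SUBNQN"        # qpairs attached to those controllers
$RPC nvmf_subsystem_get_listeners "$SUBNQN"     # confirm the RDMA listener address/port
```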
00:24:03.753 [2024-10-08 18:29:16.760633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.753 [2024-10-08 18:29:16.760688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.753 [2024-10-08 18:29:16.760721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.753 [2024-10-08 18:29:16.760741] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.753 [2024-10-08 18:29:16.760758] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:03.753 [2024-10-08 18:29:16.770519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.753 qpair failed and we were unable to recover it. 00:24:03.753 [2024-10-08 18:29:16.770696] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:03.753 [2024-10-08 18:29:16.772648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:03.753 Controller properly reset. 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with 
error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Write completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 Read completed with error (sct=0, sc=8) 00:24:04.687 starting I/O failed 00:24:04.687 [2024-10-08 18:29:17.795929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:04.687 Initializing NVMe Controllers 00:24:04.687 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.687 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.687 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:04.687 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:04.687 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:04.687 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:04.687 Initialization complete. Launching workers. 00:24:04.687 Starting thread on core 1 00:24:04.687 Starting thread on core 2 00:24:04.687 Starting thread on core 3 00:24:04.687 Starting thread on core 0 00:24:04.687 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:04.687 00:24:04.687 real 0m12.587s 00:24:04.687 user 0m27.540s 00:24:04.687 sys 0m3.016s 00:24:04.687 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.687 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.687 ************************************ 00:24:04.687 END TEST nvmf_target_disconnect_tc2 00:24:04.687 ************************************ 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:04.945 ************************************ 00:24:04.945 START TEST nvmf_target_disconnect_tc3 00:24:04.945 ************************************ 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3514208 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
host/target_disconnect.sh@59 -- # sleep 2 00:24:04.945 18:29:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:06.847 18:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3513158 00:24:06.847 18:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Write completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.224 Read completed with error (sct=0, sc=8) 00:24:08.224 starting I/O failed 00:24:08.225 Write completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Write completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Read completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Read completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Read completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Write completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 Write completed with error (sct=0, sc=8) 00:24:08.225 starting I/O failed 00:24:08.225 [2024-10-08 18:29:21.130292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such 
device or address) on qpair id 1 00:24:08.792 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3513158 Killed "${NVMF_APP[@]}" "$@" 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3514697 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3514697 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3514697 ']' 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.792 18:29:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.051 [2024-10-08 18:29:22.004553] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 
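This is where tc3 is set up: the reconnect example is launched against 192.168.100.8 with an alternate address appended to its transport ID, the original target (pid 3513158) is killed so that I/O starts failing, and a fresh nvmf_tgt is started with -m 0xF0. The invocation from a few lines above, spelled out with flag meanings; these interpretations follow SPDK's usual perf-style option conventions and are stated as an assumption rather than taken from the log:

```bash
# Hedged sketch: the reconnect invocation used by tc3, annotated.
#   -q 32      queue depth per qpair
#   -o 4096    I/O size in bytes
#   -w randrw  I/O pattern
#   -M 50      read percentage of the mixed workload
#   -t 10      run time in seconds
#   -c 0xF     core mask for the initiator-side reactors (cores 0-3)
#   -r '...'   transport ID of the target; alt_traddr names the failover
#              address the example moves to once 192.168.100.8 disappears.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
```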
00:24:09.051 [2024-10-08 18:29:22.004625] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.051 [2024-10-08 18:29:22.098614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Read completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.051 Write completed with error (sct=0, sc=8) 00:24:09.051 starting I/O failed 00:24:09.052 [2024-10-08 18:29:22.135275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:09.052 [2024-10-08 18:29:22.183515] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
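The replacement target is started with core mask 0xF0 while the reconnect example runs with 0xF, so the two processes occupy disjoint cores; that is why the EAL line above reports four cores and the reactor lines that follow come up on cores 4-7. Decoding a hex core mask is plain bit arithmetic, nothing SPDK-specific:

```bash
# Hedged sketch: list the CPU cores selected by a hex core mask.
cores_in_mask() {
  local mask=$(( $1 )) bit=0
  while [ "$mask" -ne 0 ]; do
    if (( mask & 1 )); then printf '%d ' "$bit"; fi
    mask=$(( mask >> 1 ))
    bit=$(( bit + 1 ))
  done
  echo
}

cores_in_mask 0xF0   # -> 4 5 6 7   (the nvmf_tgt reactors)
cores_in_mask 0xF    # -> 0 1 2 3   (the reconnect example)
```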
00:24:09.052 [2024-10-08 18:29:22.183560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.052 [2024-10-08 18:29:22.183570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.052 [2024-10-08 18:29:22.183579] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.052 [2024-10-08 18:29:22.183586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.052 [2024-10-08 18:29:22.185054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:24:09.052 [2024-10-08 18:29:22.185105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:24:09.052 [2024-10-08 18:29:22.185136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:24:09.052 [2024-10-08 18:29:22.185138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.988 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 Malloc0 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 [2024-10-08 18:29:22.972276] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2276620/0x2282100) succeed. 00:24:09.989 [2024-10-08 18:29:22.983511] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2277c60/0x22c37a0) succeed. 
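The two rpc_cmd calls above create the backing malloc bdev and the RDMA transport (which is when the mlx5 IB devices are brought up); the lines that follow add subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and open listeners on the failover address 192.168.100.9:4420. Pulled together, the same target configuration can be reproduced by hand against a running nvmf_tgt; a sketch using the stock scripts/rpc.py wrapper instead of the test framework's rpc_cmd helper (the rpc.py path is assumed):

```bash
# Hedged sketch: rebuild the tc3 target configuration over JSON-RPC.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc disk, 512-byte blocks
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001     # -a: allow any host, -s: serial number
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.9 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
```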
00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 [2024-10-08 18:29:23.134216] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error 
(sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Write completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 Read completed with error (sct=0, sc=8) 00:24:09.989 starting I/O failed 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.989 [2024-10-08 18:29:23.140307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.989 18:29:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3514208 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O 
failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Write completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 Read completed with error (sct=0, sc=8) 00:24:11.367 starting I/O failed 00:24:11.367 [2024-10-08 18:29:24.145291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.367 [2024-10-08 18:29:24.146787] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:11.367 [2024-10-08 18:29:24.146808] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:11.367 [2024-10-08 18:29:24.146817] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002ced40 00:24:12.304 [2024-10-08 18:29:25.150650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:12.304 qpair failed and we were unable to recover it. 00:24:12.305 [2024-10-08 18:29:25.152764] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:12.305 [2024-10-08 18:29:25.152825] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:12.305 [2024-10-08 18:29:25.152854] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:13.241 [2024-10-08 18:29:26.156561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.241 qpair failed and we were unable to recover it. 
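From this point the failure mode changes: instead of a CONNECT command completing with an error status, the RDMA connection manager rejects the connection outright (RDMA_CM_EVENT_REJECTED where RDMA_CM_EVENT_ESTABLISHED was expected, surfaced as "RDMA connect error -74"), and the retries that follow repeat roughly once per second against successive qpairs. When a log shows nothing but rejections, an out-of-band probe of the candidate listeners with the kernel initiator tools can tell "no listener at this address" apart from "listener present but refusing this host"; a hedged sketch, assuming nvme-cli and the nvme-rdma module are available on the initiator host:

```bash
# Hedged sketch: out-of-band check of the two candidate RDMA listeners.
sudo modprobe nvme-rdma

# Only the address with a live nvmf_tgt listener should return the discovery
# log page containing nqn.2016-06.io.spdk:cnode1.
sudo nvme discover -t rdma -a 192.168.100.8 -s 4420 || echo "192.168.100.8: no listener"
sudo nvme discover -t rdma -a 192.168.100.9 -s 4420 || echo "192.168.100.9: no listener"
```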
00:24:13.241 [2024-10-08 18:29:26.157911] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:13.241 [2024-10-08 18:29:26.157930] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:13.241 [2024-10-08 18:29:26.157939] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:14.177 [2024-10-08 18:29:27.161819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.177 qpair failed and we were unable to recover it. 00:24:14.177 [2024-10-08 18:29:27.163213] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:14.177 [2024-10-08 18:29:27.163232] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:14.177 [2024-10-08 18:29:27.163244] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:15.114 [2024-10-08 18:29:28.167121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.114 qpair failed and we were unable to recover it. 00:24:15.114 [2024-10-08 18:29:28.168525] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.114 [2024-10-08 18:29:28.168543] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.114 [2024-10-08 18:29:28.168552] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:24:16.050 [2024-10-08 18:29:29.172376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.050 qpair failed and we were unable to recover it. 00:24:16.050 [2024-10-08 18:29:29.174669] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.050 [2024-10-08 18:29:29.174729] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.050 [2024-10-08 18:29:29.174758] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:24:17.427 [2024-10-08 18:29:30.178630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:17.427 qpair failed and we were unable to recover it. 00:24:17.427 [2024-10-08 18:29:30.180849] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:17.427 [2024-10-08 18:29:30.180912] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:17.427 [2024-10-08 18:29:30.180942] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:18.363 [2024-10-08 18:29:31.184875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:18.363 qpair failed and we were unable to recover it. 
00:24:18.363 [2024-10-08 18:29:31.186359] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:18.363 [2024-10-08 18:29:31.186377] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:18.363 [2024-10-08 18:29:31.186385] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:19.301 [2024-10-08 18:29:32.190202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.301 qpair failed and we were unable to recover it. 00:24:19.301 [2024-10-08 18:29:32.192276] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:19.301 [2024-10-08 18:29:32.192336] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:19.301 [2024-10-08 18:29:32.192365] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:24:20.238 [2024-10-08 18:29:33.196028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-10-08 18:29:33.196147] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:20.238 A controller has encountered a failure and is being reset. 00:24:20.238 Resorting to new failover address 192.168.100.9 00:24:20.238 [2024-10-08 18:29:33.196248] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.238 [2024-10-08 18:29:33.196321] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:20.239 [2024-10-08 18:29:33.228556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:20.239 Controller properly reset. 00:24:20.239 Initializing NVMe Controllers 00:24:20.239 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.239 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.239 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:20.239 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:20.239 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:20.239 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:20.239 Initialization complete. Launching workers. 
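Only after the Keep Alive submission fails does the host give up on the primary path, resort to the failover address 192.168.100.9, reset the controller, and relaunch its workers, which is exactly the behavior tc3 is exercising. Outside of this example tool, the same idea of registering a second RDMA path and letting the driver fail over is usually expressed through the bdev_nvme RPCs; the following is only an illustration of that alternative, not something this test runs, and the option names are recalled from memory:

```bash
# Hedged sketch (not part of this test): one controller, two RDMA paths,
# failover handled by the bdev_nvme layer instead of the reconnect example.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -n "$NQN" -x failover
$RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.9 -s 4420 -n "$NQN" -x failover
```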
00:24:20.239 Starting thread on core 1 00:24:20.239 Starting thread on core 2 00:24:20.239 Starting thread on core 3 00:24:20.239 Starting thread on core 0 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:20.239 00:24:20.239 real 0m15.366s 00:24:20.239 user 0m54.303s 00:24:20.239 sys 0m4.688s 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.239 ************************************ 00:24:20.239 END TEST nvmf_target_disconnect_tc3 00:24:20.239 ************************************ 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:20.239 rmmod nvme_rdma 00:24:20.239 rmmod nvme_fabrics 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3514697 ']' 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3514697 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3514697 ']' 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3514697 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:20.239 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3514697 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3514697' 00:24:20.498 killing process with pid 3514697 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3514697 00:24:20.498 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3514697 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:24:20.758 00:24:20.758 real 0m36.678s 00:24:20.758 user 2m10.546s 00:24:20.758 sys 0m13.644s 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 ************************************ 00:24:20.758 END TEST nvmf_target_disconnect 00:24:20.758 ************************************ 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:20.758 00:24:20.758 real 5m51.947s 00:24:20.758 user 12m53.240s 00:24:20.758 sys 1m45.274s 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.758 18:29:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 ************************************ 00:24:20.758 END TEST nvmf_host 00:24:20.758 ************************************ 00:24:20.758 18:29:33 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:24:20.758 00:24:20.758 real 18m20.835s 00:24:20.758 user 43m8.786s 00:24:20.758 sys 5m54.758s 00:24:20.758 18:29:33 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.758 18:29:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 ************************************ 00:24:20.758 END TEST nvmf_rdma 00:24:20.758 ************************************ 00:24:20.758 18:29:33 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:20.758 18:29:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.758 18:29:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.758 18:29:33 -- common/autotest_common.sh@10 -- # set +x 00:24:21.019 ************************************ 00:24:21.019 START TEST spdkcli_nvmf_rdma 00:24:21.019 ************************************ 00:24:21.019 18:29:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:21.019 * Looking for test storage... 
00:24:21.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.019 --rc genhtml_branch_coverage=1 00:24:21.019 --rc genhtml_function_coverage=1 00:24:21.019 --rc genhtml_legend=1 00:24:21.019 --rc geninfo_all_blocks=1 00:24:21.019 --rc geninfo_unexecuted_blocks=1 00:24:21.019 00:24:21.019 ' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:21.019 --rc genhtml_branch_coverage=1 00:24:21.019 --rc genhtml_function_coverage=1 00:24:21.019 --rc genhtml_legend=1 00:24:21.019 --rc geninfo_all_blocks=1 00:24:21.019 --rc geninfo_unexecuted_blocks=1 00:24:21.019 00:24:21.019 ' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.019 --rc genhtml_branch_coverage=1 00:24:21.019 --rc genhtml_function_coverage=1 00:24:21.019 --rc genhtml_legend=1 00:24:21.019 --rc geninfo_all_blocks=1 00:24:21.019 --rc geninfo_unexecuted_blocks=1 00:24:21.019 00:24:21.019 ' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.019 --rc genhtml_branch_coverage=1 00:24:21.019 --rc genhtml_function_coverage=1 00:24:21.019 --rc genhtml_legend=1 00:24:21.019 --rc geninfo_all_blocks=1 00:24:21.019 --rc geninfo_unexecuted_blocks=1 00:24:21.019 00:24:21.019 ' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:21.019 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.279 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3516340 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3516340 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 3516340 ']' 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.279 18:29:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:21.279 [2024-10-08 18:29:34.258716] Starting SPDK v25.01-pre git sha1 8ce2f3c7d / DPDK 24.03.0 initialization... 00:24:21.279 [2024-10-08 18:29:34.258778] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516340 ] 00:24:21.279 [2024-10-08 18:29:34.341200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:21.279 [2024-10-08 18:29:34.429767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.279 [2024-10-08 18:29:34.429767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.216 18:29:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:28.785 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.785 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:28.786 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:28.786 Found net devices under 0000:18:00.0: mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:28.786 Found net devices under 0000:18:00.1: mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # is_hw=yes 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@442 
-- # [[ yes == yes ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # rdma_device_init 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:28.786 
18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:28.786 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:28.786 link/ether 50:6b:4b:b4:ab:56 brd ff:ff:ff:ff:ff:ff 00:24:28.786 altname enp24s0f0np0 00:24:28.786 altname ens785f0np0 00:24:28.786 inet 192.168.100.8/24 scope global mlx_0_0 00:24:28.786 valid_lft forever preferred_lft forever 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:28.786 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:28.786 link/ether 50:6b:4b:b4:ab:57 brd ff:ff:ff:ff:ff:ff 00:24:28.786 altname enp24s0f1np1 00:24:28.786 altname ens785f1np1 00:24:28.786 inet 192.168.100.9/24 scope global mlx_0_1 00:24:28.786 valid_lft forever preferred_lft forever 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # return 0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:24:28.786 192.168.100.9' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:24:28.786 192.168.100.9' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # head -n 1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:24:28.786 192.168.100.9' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # tail -n +2 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # head -n 1 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:24:28.786 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:28.787 18:29:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:28.787 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:28.787 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:28.787 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:28.787 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:28.787 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:24:28.787 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:28.787 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:28.787 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:28.787 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:28.787 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:28.787 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:28.787 ' 00:24:32.076 [2024-10-08 18:29:44.583622] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1800d60/0x180eb70) succeed. 00:24:32.076 [2024-10-08 18:29:44.593306] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1802440/0x188ec00) succeed. 
00:24:33.013 [2024-10-08 18:29:45.988469] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:35.602 [2024-10-08 18:29:48.480447] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:37.506 [2024-10-08 18:29:50.639522] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:39.410 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:39.410 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:39.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:39.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:39.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:39.410 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:39.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:39.410 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:24:39.410 18:29:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:39.978 18:29:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:39.978 18:29:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:39.978 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:39.978 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.979 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:39.979 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.979 18:29:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 18:29:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:39.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:39.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:39.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:39.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:24:39.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:24:39.979 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:39.979 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:39.979 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:39.979 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:39.979 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:39.979 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:39.979 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:39.979 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:39.979 ' 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:24:46.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:24:46.547 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:46.547 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:46.547 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3516340 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 3516340 ']' 00:24:46.547 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 3516340 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3516340 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3516340' 00:24:46.548 killing process with pid 3516340 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 3516340 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 3516340 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.548 18:29:58 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:46.548 rmmod nvme_rdma 00:24:46.548 rmmod nvme_fabrics 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:24:46.548 00:24:46.548 real 0m25.075s 00:24:46.548 user 0m55.289s 00:24:46.548 sys 0m6.168s 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:46.548 18:29:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 ************************************ 00:24:46.548 END TEST spdkcli_nvmf_rdma 00:24:46.548 ************************************ 00:24:46.548 18:29:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:46.548 18:29:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:24:46.548 18:29:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:46.548 18:29:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:46.548 18:29:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:24:46.548 18:29:59 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:24:46.548 18:29:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:24:46.548 18:29:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.548 18:29:59 -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 18:29:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:24:46.548 18:29:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:46.548 18:29:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:46.548 18:29:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.827 INFO: APP EXITING 00:24:51.827 INFO: killing all VMs 00:24:51.827 INFO: killing vhost app 00:24:51.827 INFO: EXIT DONE 00:24:54.365 Waiting for block devices as requested 00:24:54.365 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:24:54.365 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:54.365 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:54.365 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:54.365 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:54.624 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:54.624 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:54.624 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:24:54.884 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:54.884 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:54.884 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:55.143 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:55.143 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:55.143 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:55.401 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:55.401 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:55.401 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:58.692 Cleaning 00:24:58.692 Removing: /var/run/dpdk/spdk0/config 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:24:58.692 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:58.692 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:58.692 Removing: /var/run/dpdk/spdk1/config 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:24:58.692 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:58.692 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:58.692 Removing: /var/run/dpdk/spdk1/mp_socket 00:24:58.952 Removing: /var/run/dpdk/spdk2/config 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:24:58.952 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:58.952 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:58.952 Removing: /var/run/dpdk/spdk3/config 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:24:58.952 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:58.952 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:58.952 Removing: /var/run/dpdk/spdk4/config 00:24:58.952 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:24:58.952 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:58.952 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:58.952 Removing: /dev/shm/bdevperf_trace.pid3301394 00:24:58.952 Removing: /dev/shm/bdev_svc_trace.1 00:24:58.952 Removing: /dev/shm/nvmf_trace.0 00:24:58.952 Removing: /dev/shm/spdk_tgt_trace.pid3262417 00:24:58.952 Removing: /var/run/dpdk/spdk0 00:24:58.952 Removing: /var/run/dpdk/spdk1 00:24:58.952 Removing: /var/run/dpdk/spdk2 00:24:58.952 Removing: /var/run/dpdk/spdk3 00:24:58.952 Removing: /var/run/dpdk/spdk4 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3259982 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3261172 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3262417 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3262972 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3263726 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3263938 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3264810 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3264893 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3265207 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3269646 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3271067 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3271447 00:24:58.952 Removing: /var/run/dpdk/spdk_pid3271700 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3271970 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3272366 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3272558 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3272751 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3273037 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3273831 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3276431 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3276804 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3277040 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3277218 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3277635 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3277813 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3278330 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3278404 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3278617 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3278803 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3279011 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3279098 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3279512 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3279720 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3280075 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3283622 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3287446 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3296591 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3297151 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3301394 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3301684 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3305378 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3310438 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3312697 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3321608 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3343525 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3346889 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3384482 
00:24:59.212 Removing: /var/run/dpdk/spdk_pid3388954 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3394039 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3401422 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3437651 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3438453 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3439432 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3440336 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3444662 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3451571 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3452296 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3453027 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3453750 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3454027 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3457953 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3457957 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3461901 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3462365 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3462736 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3463311 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3463456 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3467856 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3468305 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3472050 00:24:59.212 Removing: /var/run/dpdk/spdk_pid3474283 00:24:59.471 Removing: /var/run/dpdk/spdk_pid3479120 00:24:59.471 Removing: /var/run/dpdk/spdk_pid3488475 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3488478 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3507024 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3507314 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3512300 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3512701 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3514208 00:24:59.472 Removing: /var/run/dpdk/spdk_pid3516340 00:24:59.472 Clean 00:24:59.472 18:30:12 -- common/autotest_common.sh@1451 -- # return 0 00:24:59.472 18:30:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:24:59.472 18:30:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.472 18:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:59.472 18:30:12 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:24:59.472 18:30:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.472 18:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:59.472 18:30:12 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:24:59.472 18:30:12 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:24:59.472 18:30:12 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:24:59.472 18:30:12 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:24:59.472 18:30:12 -- spdk/autotest.sh@394 -- # hostname 00:24:59.472 18:30:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-34 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:24:59.731 geninfo: WARNING: invalid characters removed from testname! 
00:25:21.675 18:30:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:22.243 18:30:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:24.149 18:30:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:26.056 18:30:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:27.436 18:30:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:29.407 18:30:42 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:31.313 18:30:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:31.314 18:30:44 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:25:31.314 18:30:44 -- common/autotest_common.sh@1681 -- $ lcov --version 00:25:31.314 18:30:44 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:25:31.314 18:30:44 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:25:31.314 18:30:44 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:31.314 18:30:44 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:31.314 18:30:44 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:31.314 18:30:44 -- scripts/common.sh@336 -- $ IFS=.-: 00:25:31.314 18:30:44 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:31.314 18:30:44 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:31.314 18:30:44 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:31.314 18:30:44 -- 
00:25:31.314 18:30:44 -- scripts/common.sh@338 -- $ local 'op=<'
00:25:31.314 18:30:44 -- scripts/common.sh@340 -- $ ver1_l=2
00:25:31.314 18:30:44 -- scripts/common.sh@341 -- $ ver2_l=1
00:25:31.314 18:30:44 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:25:31.314 18:30:44 -- scripts/common.sh@344 -- $ case "$op" in
00:25:31.314 18:30:44 -- scripts/common.sh@345 -- $ : 1
00:25:31.314 18:30:44 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:25:31.314 18:30:44 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:31.314 18:30:44 -- scripts/common.sh@365 -- $ decimal 1
00:25:31.314 18:30:44 -- scripts/common.sh@353 -- $ local d=1
00:25:31.314 18:30:44 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:25:31.314 18:30:44 -- scripts/common.sh@355 -- $ echo 1
00:25:31.314 18:30:44 -- scripts/common.sh@365 -- $ ver1[v]=1
00:25:31.314 18:30:44 -- scripts/common.sh@366 -- $ decimal 2
00:25:31.314 18:30:44 -- scripts/common.sh@353 -- $ local d=2
00:25:31.314 18:30:44 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:25:31.314 18:30:44 -- scripts/common.sh@355 -- $ echo 2
00:25:31.314 18:30:44 -- scripts/common.sh@366 -- $ ver2[v]=2
00:25:31.314 18:30:44 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:25:31.314 18:30:44 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:25:31.314 18:30:44 -- scripts/common.sh@368 -- $ return 0
00:25:31.314 18:30:44 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:31.314 18:30:44 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:25:31.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.314 --rc genhtml_branch_coverage=1
00:25:31.314 --rc genhtml_function_coverage=1
00:25:31.314 --rc genhtml_legend=1
00:25:31.314 --rc geninfo_all_blocks=1
00:25:31.314 --rc geninfo_unexecuted_blocks=1
00:25:31.314
00:25:31.314 '
00:25:31.314 18:30:44 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:25:31.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.314 --rc genhtml_branch_coverage=1
00:25:31.314 --rc genhtml_function_coverage=1
00:25:31.314 --rc genhtml_legend=1
00:25:31.314 --rc geninfo_all_blocks=1
00:25:31.314 --rc geninfo_unexecuted_blocks=1
00:25:31.314
00:25:31.314 '
00:25:31.314 18:30:44 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:25:31.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.314 --rc genhtml_branch_coverage=1
00:25:31.314 --rc genhtml_function_coverage=1
00:25:31.314 --rc genhtml_legend=1
00:25:31.314 --rc geninfo_all_blocks=1
00:25:31.314 --rc geninfo_unexecuted_blocks=1
00:25:31.314
00:25:31.314 '
00:25:31.314 18:30:44 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:25:31.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.314 --rc genhtml_branch_coverage=1
00:25:31.314 --rc genhtml_function_coverage=1
00:25:31.314 --rc genhtml_legend=1
00:25:31.314 --rc geninfo_all_blocks=1
00:25:31.314 --rc geninfo_unexecuted_blocks=1
00:25:31.314
00:25:31.314 '
00:25:31.314 18:30:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:25:31.314 18:30:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:25:31.314 18:30:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:31.314 18:30:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:31.314 18:30:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
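The cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field to decide whether the installed lcov predates 1.15. A simplified, self-contained reconstruction of that idea (not the exact scripts/common.sh code, and assuming purely numeric components):

    # version_lt A B: succeed (return 0) when version A sorts strictly before B
    version_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"    # "1.15" -> (1 15)
        IFS='.-:' read -ra v2 <<< "$2"    # "2"    -> (2)
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "older lcov detected"   # 1.15 < 2, so the message prints

In the run above the comparison succeeds, so lcov_rc_opt and the LCOV_OPTS/LCOV variables are populated with the branch- and function-coverage flags.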
00:25:31.314 18:30:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:31.314 18:30:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:31.314 18:30:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:31.314 18:30:44 -- paths/export.sh@5 -- $ export PATH
00:25:31.314 18:30:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:31.314 18:30:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:25:31.314 18:30:44 -- common/autobuild_common.sh@486 -- $ date +%s
00:25:31.314 18:30:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728405044.XXXXXX
00:25:31.314 18:30:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728405044.WvwnOv
00:25:31.314 18:30:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:25:31.314 18:30:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:25:31.314 18:30:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:25:31.314 18:30:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:25:31.314 18:30:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:25:31.314 18:30:44 -- common/autobuild_common.sh@502 -- $ get_config_params
00:25:31.314 18:30:44 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:25:31.314 18:30:44 -- common/autotest_common.sh@10 -- $ set +x
00:25:31.314 18:30:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:25:31.314 18:30:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:25:31.314 18:30:44 -- pm/common@17 -- $ local monitor
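Before packaging starts, autobuild_common.sh@486 creates a unique throwaway workspace by combining the current epoch time with a mktemp template. A small sketch of that pattern (variable names are illustrative):

    ts=$(date +%s)                                   # epoch seconds, e.g. 1728405044
    workspace=$(mktemp -dt "spdk_${ts}.XXXXXX")      # unique dir under $TMPDIR, e.g. /tmp/spdk_1728405044.WvwnOv
    export SPDK_WORKSPACE=$workspace
    echo "packaging in $SPDK_WORKSPACE"

Embedding the timestamp also makes the directory easy to correlate with the monitor logs below, which reuse the same 1728405044 suffix.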
"${MONITOR_RESOURCES[@]}" 00:25:31.314 18:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:31.314 18:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:31.314 18:30:44 -- pm/common@21 -- $ date +%s 00:25:31.314 18:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:31.314 18:30:44 -- pm/common@21 -- $ date +%s 00:25:31.314 18:30:44 -- pm/common@25 -- $ sleep 1 00:25:31.314 18:30:44 -- pm/common@21 -- $ date +%s 00:25:31.314 18:30:44 -- pm/common@21 -- $ date +%s 00:25:31.314 18:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405044 00:25:31.314 18:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405044 00:25:31.314 18:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405044 00:25:31.314 18:30:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405044 00:25:31.314 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405044_collect-cpu-load.pm.log 00:25:31.314 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405044_collect-vmstat.pm.log 00:25:31.314 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405044_collect-cpu-temp.pm.log 00:25:31.314 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405044_collect-bmc-pm.bmc.pm.log 00:25:32.252 18:30:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:25:32.252 18:30:45 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:25:32.252 18:30:45 -- spdk/autopackage.sh@14 -- $ timing_finish 00:25:32.252 18:30:45 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:32.252 18:30:45 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:32.252 18:30:45 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:32.252 18:30:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:32.253 18:30:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:32.253 18:30:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:32.253 18:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:32.253 18:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:32.253 18:30:45 -- pm/common@44 -- $ pid=3529993 00:25:32.253 18:30:45 -- pm/common@50 -- $ kill -TERM 3529993 00:25:32.253 18:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:32.253 18:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:32.253 18:30:45 -- pm/common@44 -- $ pid=3529995 00:25:32.253 
00:25:32.253 18:30:45 -- pm/common@50 -- $ kill -TERM 3529995
00:25:32.253 18:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:32.253 18:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:25:32.253 18:30:45 -- pm/common@44 -- $ pid=3529997
00:25:32.253 18:30:45 -- pm/common@50 -- $ kill -TERM 3529997
00:25:32.253 18:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:32.253 18:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:25:32.253 18:30:45 -- pm/common@44 -- $ pid=3530020
00:25:32.253 18:30:45 -- pm/common@50 -- $ sudo -E kill -TERM 3530020
00:25:32.253 + [[ -n 3186993 ]]
00:25:32.253 + sudo kill 3186993
00:25:32.262 [Pipeline] }
00:25:32.278 [Pipeline] // stage
00:25:32.283 [Pipeline] }
00:25:32.298 [Pipeline] // timeout
00:25:32.303 [Pipeline] }
00:25:32.317 [Pipeline] // catchError
00:25:32.322 [Pipeline] }
00:25:32.336 [Pipeline] // wrap
00:25:32.343 [Pipeline] }
00:25:32.356 [Pipeline] // catchError
00:25:32.365 [Pipeline] stage
00:25:32.367 [Pipeline] { (Epilogue)
00:25:32.380 [Pipeline] catchError
00:25:32.382 [Pipeline] {
00:25:32.394 [Pipeline] echo
00:25:32.395 Cleanup processes
00:25:32.401 [Pipeline] sh
00:25:32.688 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:32.688 3530127 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:25:32.688 3530396 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:32.702 [Pipeline] sh
00:25:32.988 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:32.988 ++ grep -v 'sudo pgrep'
00:25:32.988 ++ awk '{print $1}'
00:25:32.988 + sudo kill -9 3530127
00:25:33.000 [Pipeline] sh
00:25:33.285 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:41.422 [Pipeline] sh
00:25:41.709 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:41.709 Artifacts sizes are good
00:25:41.724 [Pipeline] archiveArtifacts
00:25:41.731 Archiving artifacts
00:25:41.837 [Pipeline] sh
00:25:42.124 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:25:42.139 [Pipeline] cleanWs
00:25:42.150 [WS-CLEANUP] Deleting project workspace...
00:25:42.150 [WS-CLEANUP] Deferred wipeout is used...
00:25:42.157 [WS-CLEANUP] done
00:25:42.159 [Pipeline] }
00:25:42.172 [Pipeline] // catchError
00:25:42.183 [Pipeline] sh
00:25:42.509 + logger -p user.info -t JENKINS-CI
00:25:42.521 [Pipeline] }
00:25:42.536 [Pipeline] // stage
00:25:42.541 [Pipeline] }
00:25:42.555 [Pipeline] // node
00:25:42.559 [Pipeline] End of Pipeline
00:25:42.596 Finished: SUCCESS